Golang interview questions and answers for 2025
Golang Interview Questions for Freshers and Intermediate Levels
What are the advantages of using Golang? Why did you choose Golang?
Advantages of Using Golang:
- Performance: Golang offers excellent performance, thanks to its compiled nature and efficient runtime. It’s nearly as fast as C/C++ while being much easier to write and maintain.
- Concurrency: The language’s built-in support for concurrency, through goroutines and channels, makes it ideal for building highly scalable and concurrent systems.
- Simplicity and Readability: Go’s syntax is straightforward and clean, making code easy to read, review, and maintain. This simplicity is particularly beneficial for large teams or open-source projects.
- Standard Library: The comprehensive and well-designed standard library provides tools for handling tasks like HTTP servers, JSON processing, and cryptography without relying on third-party packages.
- Cross-Platform Compilation: Go allows developers to easily compile binaries for multiple platforms, which is invaluable for building applications that need to run on different systems.
- Community and Ecosystem: Go has a thriving community and ecosystem with high-quality libraries, tools, and documentation.
I chose Go because its strong performance and concurrency features make it my go-to choice for backend systems, microservices, and cloud-native applications.
Additionally, Go’s strong opinionation on code style and structure promotes consistency across projects, reducing the cognitive load when working with large teams.
The language’s growing adoption by major companies and its applicability across diverse domains, from web development to DevOps tools, convinced me that Go was the right choice for both personal growth and delivering high-value solutions.
What are the basic data types in Go?
Go provides a rich set of basic data types, including:
- Boolean: bool
- Integer types: int, int8, int16, int32, int64, and their unsigned counterparts uint, uint8 (alias byte), uint16, uint32, uint64
- Floating-point types: float32, float64
- Complex number types: complex64, complex128
- String type: string
Go also supports derived types such as arrays, slices, structs, pointers, maps, functions, and channels.
How do you declare and initialize a variable in Go?
You can declare variables using the var keyword or the short declaration operator :=.
// Using var keyword
var x int
x = 10
// Declaration with initialization
var y int = 20
// Type inference
var z = 30
// Short variable declaration (only inside functions)
a := 40
What are Go packages and how do you import them?
Packages are the basic units of code organization in Go. Every Go file starts with a package declaration. You can import standard library packages or external packages using the import keyword.
package main
import (
"fmt"
"math/rand"
)
func main() {
fmt.Println("Random number:", rand.Intn(100))
}
Explain the difference between := and var declarations.
- := is a short variable declaration that can only be used inside functions. It infers the variable's type automatically and both declares and initializes the variable.
- var can be used at the package or function scope. It allows you to specify the type explicitly or rely on type inference, and it can declare multiple variables at once, including package-level variables.
What is a slice in Go?
A slice in Go is a dynamically sized, flexible view into the elements of an underlying array. Unlike arrays, slices do not have a fixed length. They are defined by a pointer to an underlying array, a length, and a capacity. For example:
nums := []int{1, 2, 3, 4}
fmt.Println(nums) // [1 2 3 4]
fmt.Println(len(nums)) // 4
fmt.Println(cap(nums)) // 4
How do you create a slice with make()?
The make function is used to create slices (as well as maps and channels) with a specified length and capacity.
s := make([]int, 5) // length 5, capacity 5
s2 := make([]int, 5, 10) // length 5, capacity 10
How do you copy slices in Go?
Use the built-in copy() function. copy(destination, source) copies elements from source to destination and returns the number copied, which is the minimum of the two slices' lengths.
src := []int{1, 2, 3}
dest := make([]int, 3)
copy(dest, src)
fmt.Println(dest) // [1 2 3]
What are maps in Go and how do you declare them?
A map in Go is a built-in data structure that associates keys of one type with values of another type. They are implemented as hash tables. Maps are declared and initialized using the make() function or map literals.
m := make(map[string]int)
m["age"] = 30
m2 := map[string]string{
"name": "Alice",
"city": "London",
}
How do you check if a key exists in a map?
Use the “comma ok” idiom:
value, ok := m["age"]
if ok {
fmt.Println("Key exists with value:", value)
} else {
fmt.Println("Key does not exist")
}
What are structs in Go?
A struct in Go is a composite data type that groups together variables under one name. These variables are called fields.
type Person struct {
Name string
Age int
}
p := Person{Name: "Bob", Age: 25}
How do you define methods on structs?
You can define a method by declaring a function with a receiver. The receiver is an extra parameter, declared between the func keyword and the method name.
type Rectangle struct {
Width, Height float64
}
func (r Rectangle) Area() float64 {
return r.Width * r.Height
}
rect := Rectangle{5, 10}
fmt.Println(rect.Area()) // 50
Explain pointers in Go.
A pointer holds the memory address of a variable. In Go, you use & to get the address of a variable and * to dereference a pointer.
x := 10
p := &x
fmt.Println(*p) // prints 10
*p = 20
fmt.Println(x) // prints 20
What is the zero value in Go?
All variables in Go have a zero value if they are not initialized. For example:
- int: zero value is 0
- bool: zero value is false
- string: zero value is "" (empty string)
- map, slice, chan, pointer: zero value is nil
What is the difference between new() and make() functions?
- new() allocates zeroed memory and returns a pointer to the zero value of the given type; it performs no further initialization.
- make() is used to initialize slices, maps, and channels. It returns an initialized (not zeroed) value of the chosen type.
How does concurrency work in Go?
Go uses goroutines and channels for concurrency. A goroutine is a lightweight thread managed by the Go runtime. Channels are typed pipes that allow goroutines to communicate and synchronize.
go func() {
fmt.Println("Hello from a goroutine")
}()
What are channels in Go? What is a race condition?
Channels provide a way for goroutines to communicate by sending and receiving values. They ensure synchronization and help prevent race conditions. You can declare channels using the chan keyword:
ch := make(chan int)
go func() {
ch <- 10
}()
value := <-ch
fmt.Println(value) // 10
A race condition occurs when multiple goroutines access shared data simultaneously, and at least one modifies it without proper synchronization.
This leads to unpredictable behavior because the program’s outcome depends on the timing of these operations, potentially causing data corruption or bugs.
What is the difference between buffered and unbuffered channels?
- Unbuffered channels: Send and receive operations block until both the sender and the receiver are ready.
- Buffered channels: Have a capacity. Sends do not block until the buffer is full, and receives do not block until the buffer is empty.
ch := make(chan int, 3) // buffered channel with capacity 3
Explain the select statement in Go.
select lets a goroutine wait on multiple communication operations. It blocks until one of its cases can run. This is useful to handle multiple channel operations simultaneously.
select {
case val := <-ch1:
fmt.Println("Received from ch1:", val)
case ch2 <- 42:
fmt.Println("Sent to ch2")
default:
fmt.Println("No communication")
}
How do you handle errors in Go?
Go encourages explicit error handling. Functions often return an error type as the last return value. You check if the error is nil. If not nil, handle it appropriately.
f, err := os.Open("file.txt")
if err != nil {
// handle error
return
}
defer f.Close()
// use f
How does defer work in Go?
defer schedules a function call to be run after the function containing it returns. Deferred calls are executed in Last-In-First-Out order.
func main() {
defer fmt.Println("world")
fmt.Println("hello")
}
This will print:
hello
world
What is a Go interface?
An interface in Go is a type that specifies a method set. If a type implements those methods, it is said to satisfy the interface. Interfaces enable polymorphism in Go.
type Shape interface {
Area() float64
}
Any type that has a method Area() float64 is a Shape.
How do you implement an interface in Go?
There is no implements keyword. In Go, implementing an interface is based on duck typing, which means “if it looks like a duck and quacks like a duck, it’s a duck.” A type implements an interface simply by defining all the methods declared in that interface — no explicit declaration is needed.
type Circle struct {
Radius float64
}
func (c Circle) Area() float64 {
return 3.14 * c.Radius * c.Radius
}
var s Shape = Circle{5}
fmt.Println(s.Area())
Can you have multiple receivers implementing the same interface?
Yes. Any number of types can implement the same interface as long as they define the methods required by that interface.
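For example, a sketch in which two unrelated types both satisfy the Shape interface from the previous answer:

```go
package main

import "fmt"

type Shape interface {
	Area() float64
}

type Circle struct{ Radius float64 }

func (c Circle) Area() float64 { return 3.14 * c.Radius * c.Radius }

type Square struct{ Side float64 }

func (s Square) Area() float64 { return s.Side * s.Side }

func main() {
	// Both Circle and Square satisfy Shape, so they can share a slice.
	shapes := []Shape{Circle{Radius: 1}, Square{Side: 2}}
	for _, s := range shapes {
		fmt.Println(s.Area()) // 3.14, then 4
	}
}
```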
What is the blank identifier _ in Go?
The blank identifier _ is used to ignore a value. For example, if a function returns two values and you only need one, you can discard the other using _:
val, _ := someFunction()
What is the purpose of the init() function?
init() functions are called before main() executes, typically used for initialization logic like setting up package-level variables. There can be multiple init() functions in a package, and they execute in the order they appear.
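A small sketch of the initialization order (package-level variables first, then init(), then main()):

```go
package main

import "fmt"

// Package-level variables are initialized before init() runs.
var config = loadDefaults()

func loadDefaults() map[string]string {
	return map[string]string{"env": "dev"}
}

func init() {
	// Runs after package-level variables are set, before main().
	config["initialized"] = "true"
}

func main() {
	fmt.Println(config["env"], config["initialized"]) // dev true
}
```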
How do you run tests in Go?
Go uses the go test command for testing. Test files end with _test.go and contain functions named TestXxx(t *testing.T). Running go test in a package directory automatically discovers and runs these tests.
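A minimal sketch (file names are illustrative): given a function Add, a matching test in a _test.go file might look like this, and `go test` would discover and run it:

```go
package main

import "testing"

// Add is the function under test (it would normally live in add.go).
func Add(a, b int) int {
	return a + b
}

// TestAdd lives in a file ending in _test.go and runs via `go test`.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d; want 5", got)
	}
}
```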
Explain the Go module system.
Modules are the way Go manages dependencies. A go.mod file defines the module path and tracks dependency versions. go get manages retrieving and updating dependencies. Modules replaced the older GOPATH-based dependency management.
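A minimal go.mod sketch (the module path and dependency version here are illustrative):

```
module github.com/user/myapp

go 1.21

require github.com/google/uuid v1.6.0
```

Running `go mod init github.com/user/myapp` creates the file, and `go get` adds or updates the require entries (recording checksums in go.sum).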
What are goroutines and how are they different from OS threads?
Goroutines are lightweight, managed by the Go runtime, and multiplexed onto OS threads. They start with a small stack and grow as needed. Compared to OS threads, goroutines are more efficient in terms of memory and context-switch overhead, allowing millions of goroutines to run concurrently.
What is the panic and recover mechanism in Go?
- panic: Stops normal execution of the current function and begins unwinding the stack.
- recover: Can catch a panic within a deferred call, enabling you to handle the error and prevent the program from crashing.
func f() {
defer func() {
if r := recover(); r != nil {
fmt.Println("Recovered from panic:", r)
}
}()
panic("something went wrong")
}
Golang Interview Questions for Experienced Levels
Explain how garbage collection works in Go.
Go’s garbage collector (GC) uses a concurrent, tri-color mark-and-sweep algorithm that runs in parallel with the application. It marks reachable objects, then sweeps away unreachable ones.
Modern versions of Go have low GC pauses and aim to achieve sub-millisecond pause times. The GC is tuned via environment variables (like GOGC) to control how often it runs based on heap growth.
How would you optimize memory usage in a high-load Go application?
Techniques include:
- Use efficient data structures: Replace large arrays or maps with more memory-efficient alternatives, or preallocate slices when possible.
- Minimize garbage: Reuse buffers, use object pools (e.g., sync.Pool), and avoid unnecessary allocations.
- Reduce pointer chasing: Store data contiguously and consider using value types instead of pointers where possible.
- Profiling and analysis: Use go tool pprof and memory profiles to identify hotspots and leaks.
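As a sketch of the object-pool point, sync.Pool can recycle buffers across calls instead of allocating a fresh one each time (the format helper here is illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable byte buffers to reduce allocations.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func format(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // clear contents so the buffer is safe to reuse
		bufPool.Put(buf)
	}()
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(format("gopher")) // hello, gopher
}
```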
What is a memory leak and how to detect it?
A memory leak occurs when a program allocates memory but fails to release it when it’s no longer needed. This happens when references to unused objects are unintentionally retained, preventing the memory from being reclaimed.
Over time, this can cause increased memory usage, degraded performance, or program crashes.
How to Detect a Memory Leak
- Monitoring Memory Usage: Use tools to track memory usage over time. If memory usage keeps growing even when the workload is stable, it could indicate a memory leak.
- Memory Profiling: Use tools like pprof in Go to analyze heap allocations and identify objects that are unexpectedly retained. For example, go tool pprof <binary> <heap-profile> shows which parts of the code are holding onto memory.
- Garbage Collection Logs: Enable GC logs to check memory usage trends. In Go, use the GODEBUG=gctrace=1 environment variable.
- Heap Dumps: Capture and analyze heap dumps to see which objects are still in memory and why they are not being garbage collected.
- Static Analysis: Use tools like golangci-lint to catch potential issues in the code that could lead to leaks.
How do you handle concurrency at scale in Go?
- Goroutine patterns: Use worker pools to limit concurrency and avoid creating too many goroutines.
- Channels and synchronization: Properly design channel usage to avoid deadlocks and use sync primitives like sync.Mutex, sync.RWMutex, or sync.Cond.
- Context for cancellation: Pass context.Context to goroutines to signal cancellation, preventing resource leaks.
What is a worker pool pattern and how do you implement it in Go?
A worker pool pattern uses a fixed number of goroutines (workers) to process jobs from a channel. This limits the maximum concurrency level and can prevent resource exhaustion.
Example:
// Worker function (declared at package level; Go does not allow
// named function declarations inside another function)
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        // process job j
        results <- j * 2
    }
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)
    // Create a fixed number of workers
    for w := 1; w <= 5; w++ {
        go worker(jobs, results)
    }
    // Feed jobs
    for i := 1; i <= 10; i++ {
        jobs <- i
    }
    close(jobs)
    // Collect results
    for i := 1; i <= 10; i++ {
        <-results
    }
}
How do you debug performance issues in Go applications?
- Profiling tools: Use pprof, go test -bench, go tool trace, and runtime/pprof to analyze CPU, memory, and block profiles.
- Metrics and logging: Instrument code using expvar, Prometheus metrics, or logs to understand performance characteristics.
- Benchmarking: Write benchmarks and run go test -bench . to isolate slow code paths.
What is the context package used for?
context provides a way to manage deadlines, cancellation signals, and request-scoped data. It’s essential for controlling goroutines in long-running services, ensuring that resources are cleaned up when requests end or time out.
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
How do you implement rate limiting in Go?
You can use the standard library's time.Ticker or time.After to implement token bucket algorithms, or use packages like golang.org/x/time/rate.
Simple token bucket example:
limiter := rate.NewLimiter(2, 5) // 2 requests/second, burst of 5
if limiter.Allow() {
// handle request
} else {
// rate limit exceeded
}
Explain how to use the sync package (Mutex, RWMutex, WaitGroup, Once).
- Mutex/RWMutex: Protect shared resources by locking critical sections. sync.Mutex allows exclusive access, while sync.RWMutex allows multiple readers or one writer.
- WaitGroup: Wait for a collection of goroutines to finish.
- Once: Run a piece of code exactly once, often used for initialization.
data := make(map[string]int)
var mu sync.Mutex
var wg sync.WaitGroup
// Function to write to the map
writeToMap := func(key string, value int) {
defer wg.Done()
mu.Lock()
data[key] = value
mu.Unlock()
}
// Start goroutines
wg.Add(3)
go writeToMap("alpha", 1)
go writeToMap("beta", 2)
go writeToMap("gamma", 3)
wg.Wait()
fmt.Println("Final Map Data:", data)
How do you prevent deadlocks?
- Consistent locking order: Always lock shared resources in the same order.
- Timeouts and cancellation: Use contexts or channels with timeouts.
- Avoid holding locks for long: Minimize the time a lock is held.
- Careful design: Use select statements and try to avoid circular waits.
What are common memory pitfalls in Go and how to avoid them?
- Indexing or slicing out of bounds: Indexing past a slice's length, or slicing past its capacity, causes a runtime panic. Always check slice indices and bounds.
- Memory leaks via goroutines: Not ending goroutines can cause memory leaks. Use contexts and ensure goroutines return.
- Not closing channels: Can block senders forever. Ensure channels are closed when done.
Explain interface internals and how dynamic dispatch works in Go.
Interfaces in Go store two pointers: one to the type’s method table (it’s called an “itab”) and one to the concrete data. Calls through the interface invoke methods via this lookup table, which results in dynamic dispatch at runtime.
How do you ensure backward compatibility with Go modules?
Use semantic versioning. When making breaking changes, increment the major version and use the /v2, /v3, … suffix in the module path.
For example:
module github.com/user/repo/v2
This allows older code to continue using the old version without breaking.
How do you do dependency injection in Go?
Go doesn’t have a built-in DI framework. You typically do dependency injection by passing interfaces or structs as parameters to constructors and functions. Third-party libraries or manual wiring in main() are common approaches.
type UserRepo interface {
GetUser(id string) User
}
func NewUserService(repo UserRepo) *UserService {
return &UserService{repo: repo}
}
How do you implement a graceful shutdown in a Go server?
Use context and os.Signal channel to catch SIGINT or SIGTERM. When the signal is received, initiate a shutdown of the server with a timeout, allowing in-flight requests to finish.
srv := &http.Server{Addr: ":8080"}
go srv.ListenAndServe()
quit := make(chan os.Signal, 1)
signal.Notify(quit, os.Interrupt, syscall.SIGTERM)
<-quit
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
srv.Shutdown(ctx)
How do you efficiently handle JSON encoding and decoding in Go, particularly in high-performance applications?
- Leverage encoding/json features:
  - Use well-defined structs with appropriate field tags to align JSON keys with expected formats.
  - Use json.Decoder for processing large JSON payloads incrementally, avoiding the need to load the entire payload into memory.
- Reduce memory allocations:
  - Reuse buffers with bytes.Buffer or sync.Pool to minimize memory allocations for repeated marshaling/unmarshaling operations.
  - Preallocate slices or maps when the size is predictable to avoid dynamic resizing.
- Optimize encoding and decoding:
  - Implement custom Marshaler and Unmarshaler interfaces when the default behavior isn't efficient for the application's requirements.
  - Use optimized third-party libraries like json-iterator or easyjson for faster serialization and deserialization.
- Profile and benchmark:
  - Regularly benchmark JSON processing with testing and pprof to identify and resolve performance bottlenecks.
  - Avoid excessive use of reflection, as it increases processing time and memory usage.
- Streamline error handling:
  - Perform strict input validation during decoding to catch malformed data early.
  - Log errors with sufficient context to make debugging easier without impacting application performance.
By applying these strategies, JSON handling can be both performant and reliable, even in high-load scenarios.
Can you explain the difference between json.Marshal and json.MarshalIndent in Go? In what scenarios would you use each, and why?
The key difference between json.Marshal and json.MarshalIndent lies in their output format:
- json.Marshal: Produces a compact JSON representation without any additional whitespace. It is ideal for scenarios where minimizing the size of the JSON payload is critical, such as when sending data over a network or storing it in a database.
- json.MarshalIndent: Produces a human-readable, indented JSON representation by adding whitespace and line breaks. This is useful for debugging, logging, or generating JSON files that need to be easily readable by humans.
Scenarios for Using json.Marshal:
- High-performance APIs where reducing payload size improves speed and efficiency.
- Compact storage requirements, such as in databases or cache systems.
Scenarios for Using json.MarshalIndent:
- Logging or debugging to make JSON outputs easier to understand.
- Generating configuration files or API documentation that requires readable JSON formatting.
By choosing the appropriate function based on the context, you can optimize for either performance or readability.
How do you handle large file processing in Go?
- Streaming IO: Use bufio.Reader or io.Reader interfaces to read data in chunks instead of loading it all into memory.
- Goroutines and pipelines: Process chunks concurrently.
- Memory mapping (mmap): Use golang.org/x/exp/mmap if appropriate, but be mindful of system limitations.
Explain how to use build tags in Go.
Build tags allow conditional compilation. They are placed as comments at the top of a file, before the package declaration:
//go:build linux

package main
This file only builds on Linux. Go 1.17 introduced the //go:build syntax shown above; older code may still carry the legacy // +build form. Build tags control which files are included for different platforms or conditions.
How do you integrate C/C++ code with Go?
Use cgo. Add import “C” and comment directives for including C headers. Cgo allows calling C functions from Go code. This should be used sparingly due to complexity and performance overhead.
What is escape analysis and how does it affect performance?
Escape analysis determines whether variables can be allocated on the stack or must “escape” to the heap. If a variable’s lifetime extends beyond the function, or it’s referenced by a closure, it escapes. Minimizing escapes reduces garbage collection overhead and improves performance.
Use go build -gcflags="-m" to see the compiler's escape analysis logs.
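A small sketch of the distinction (the function names are illustrative; run go build -gcflags="-m" on it to see the compiler's decisions):

```go
package main

import "fmt"

// stackSum's local variable can stay on the stack: nothing outlives the call.
func stackSum(a, b int) int {
	s := a + b
	return s
}

// heapInt's local variable escapes to the heap: its address is returned
// and must remain valid after the function returns.
func heapInt() *int {
	x := 42
	return &x // reported as "&x escapes to heap" by -gcflags="-m"
}

func main() {
	fmt.Println(stackSum(1, 2), *heapInt()) // 3 42
}
```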
How do you manage configuration in a Go application?
- Environment variables: Use os.Getenv to read configuration from the environment.
- Flags: Use the flag package for command-line parameters.
- Configuration files (JSON, YAML): Parse with encoding/json, gopkg.in/yaml.v2, or other libraries.
- Viper: A popular library for managing configurations from multiple sources.
Describe the go toolchain commands (go build, go run, go test, go get).
- go build: Compiles the packages and dependencies into an executable or object files.
- go run: Compiles and runs the program immediately.
- go test: Runs tests defined in _test.go files.
- go get: Manages and installs packages and modules from remote repositories (in module mode, go get and go install now have slightly different roles).
How do you use time.Duration and manage timeouts in Go?
time.Duration represents the elapsed time. You can use time.Sleep, context.WithTimeout, or select with a time.After channel to manage timeouts.
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
How can I check if two slices are equal?
To check if two slices are equal in Go, you need to consider their length, order, and elements. Here’s a concise and reliable way to compare slices:
import "reflect"
func areSlicesEqual(slice1, slice2 []int) bool {
return reflect.DeepEqual(slice1, slice2)
}
Explanation: reflect.DeepEqual checks if the slices are of the same length and contain the same elements in the same order. This method works for slices of any type.
Manual Comparison (for better performance):
For simple types like int or string, you can manually compare the slices for improved performance:
func areSlicesEqual(slice1, slice2 []int) bool {
if len(slice1) != len(slice2) {
return false
}
for i := range slice1 {
if slice1[i] != slice2[i] {
return false
}
}
return true
}
Key Points to Consider:
- If element order does not matter, sort both slices first (e.g., with sort.Slice) before comparing.
- For large slices, consider optimizations or hashing techniques if performance is critical.
By using the appropriate method, you can reliably check for equality based on your specific use case.
How do you manage microservices communication in Go?
- HTTP/REST: Use net/http or frameworks.
- gRPC: Use google.golang.org/grpc for binary RPC.
- Message queues: Integrate with Kafka, NATS, or RabbitMQ using client libraries.
- Service discovery: Use Consul, etcd, or Kubernetes DNS for dynamic service locations.
How do you handle logging in production?
Use structured logging libraries like zap or logrus. Add correlation IDs, timestamps, and severity levels. Log in JSON for easier parsing by log management systems. Ensure logs don’t overwhelm the system and consider log rotation.
How to safely publish on GitHub a Go module with private dependencies?
- Use Go modules with private repositories:
  - Configure the GOPRIVATE environment variable.
  - Use SSH keys or token-based auth for private repos.
  - Don't commit credentials.
- Document how others can access the private dependencies if they have the right credentials.
Explain the difference between interface{} and generics in Go.
- interface{} (empty interface): Accepts any type, but you lose type safety and must use type assertions or reflection.
- Generics (Go 1.18+): Allow defining functions and data structures that can work with any type that satisfies certain constraints, providing type safety at compile time and reducing boilerplate code.
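A short sketch of the generic approach (the cmp.Ordered constraint is from the standard library's cmp package, added in Go 1.21; on older versions you would write the constraint by hand):

```go
package main

import (
	"cmp"
	"fmt"
)

// Max works for any ordered type; the compiler checks the constraint,
// so no type assertions are needed at the call site.
func Max[T cmp.Ordered](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Max(3, 7))     // 7
	fmt.Println(Max("a", "b")) // b
}
```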
How do you write benchmarks in Go?
Benchmarks are functions named BenchmarkXxx(b *testing.B) in _test.go files. Inside them, run the code you want to measure in a loop b.N times.
Run go test -bench=. to execute benchmarks.
func BenchmarkSum(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = 1 + 2
}
}
What’s new in recent Go releases that improves developer productivity?
- Go generics: Introduced in Go 1.18, enabling reusable data structures and algorithms.
- Faster compilation and smaller binaries: Each Go release refines compiler optimizations.
- Improved standard library: Better support for fuzz testing, new APIs.
- Tooling enhancements: gopls improvements, better go work for workspace management.
Coding tasks suitable for Golang interviews
Implement a Merge Sort Function
Implement a merge sort function that takes a slice of integers and returns a new sorted slice using the merge sort algorithm.
package main
func mergeSort(nums []int) []int {
// Your code here
return nil
}
Requirements
- Input: A slice of integers, e.g., []int{5, 2, 9, 1}.
- Output: A new slice that contains the integers in sorted order.
- Algorithm Details (Divide and Conquer):
  - Recursively divide the input slice into two halves until each half contains one or no elements.
  - Merge the sorted halves to produce the final sorted slice.
  - The merging step should ensure elements are in ascending order.
- Edge Cases:
  - Empty Slice: Return an empty slice.
  - Single Element: Return the same slice.
  - Already Sorted: Ensure the output is identical to the input.
Example Usage
package main
import (
"fmt"
)
func main() {
nums := []int{5, 2, 9, 1, 6, 3}
sorted := mergeSort(nums)
fmt.Println(sorted) // [1, 2, 3, 5, 6, 9]
}
Hints
- Use a helper function to perform the merge step, where two sorted slices are combined into one sorted slice.
- The base case for the recursion is when the input slice has one or no elements.
- Ensure that your implementation works for slices of varying lengths, including empty slices.
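For reference, one possible solution along the lines of the hints above (one of many valid implementations):

```go
package main

import "fmt"

// mergeSort returns a new sorted slice; the input is left unmodified.
func mergeSort(nums []int) []int {
	if len(nums) <= 1 {
		// Base case: zero or one element is already sorted.
		out := make([]int, len(nums))
		copy(out, nums)
		return out
	}
	mid := len(nums) / 2
	left := mergeSort(nums[:mid])
	right := mergeSort(nums[mid:])
	return merge(left, right)
}

// merge combines two sorted slices into one sorted slice.
func merge(a, b []int) []int {
	out := make([]int, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i] <= b[j] {
			out = append(out, a[i])
			i++
		} else {
			out = append(out, b[j])
			j++
		}
	}
	// Append whichever half still has elements left.
	out = append(out, a[i:]...)
	out = append(out, b[j:]...)
	return out
}

func main() {
	fmt.Println(mergeSort([]int{5, 2, 9, 1, 6, 3})) // [1 2 3 5 6 9]
}
```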
LRU Cache
Implement an LRU (Least Recently Used) cache that is safe for concurrent use. The cache should support the following operations:
- Get(key):
  - Retrieve the value associated with the key.
  - Return false if the key is not present in the cache.
- Put(key, value):
  - Insert a key-value pair into the cache.
  - If the cache reaches its capacity, evict the least recently used item.
Requirements
- The cache should be thread-safe and allow concurrent access.
- Handle scenarios where the cache capacity is exceeded by evicting the least recently used item.
Function Signatures
package main
type LRUCache struct {
// Your code here
}
func NewLRUCache(cap int) *LRUCache {
// Your code here
return nil
}
func (c *LRUCache) Get(key string) (string, bool) {
// Your code here
return "", false
}
func (c *LRUCache) Put(key, value string) {
// Your code here
}
Example Usage
package main
import "fmt"
func main() {
cache := NewLRUCache(2)
cache.Put("a", "1")
cache.Put("b", "2")
fmt.Println(cache.Get("a")) // Output: "1", true
cache.Put("c", "3") // Evicts key "b"
fmt.Println(cache.Get("b")) // Output: "", false
fmt.Println(cache.Get("c")) // Output: "3", true
cache.Put("d", "4") // Evicts key "a"
fmt.Println(cache.Get("a")) // Output: "", false
fmt.Println(cache.Get("d")) // Output: "4", true
}
Notes
- Use a doubly linked list to maintain the order of usage.
- Use a hash map for fast access to nodes in the linked list.
- Ensure thread safety by protecting shared data structures with a synchronization mechanism like sync.Mutex.
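For reference, one possible solution following those notes, using the standard library's container/list for the doubly linked list:

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type entry struct {
	key, value string
}

type LRUCache struct {
	mu    sync.Mutex
	cap   int
	ll    *list.List               // front = most recently used
	items map[string]*list.Element // key -> node in ll
}

func NewLRUCache(cap int) *LRUCache {
	return &LRUCache{
		cap:   cap,
		ll:    list.New(),
		items: make(map[string]*list.Element),
	}
}

func (c *LRUCache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.ll.MoveToFront(el) // mark as most recently used
	return el.Value.(*entry).value, true
}

func (c *LRUCache) Put(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.ll.MoveToFront(el)
		return
	}
	if c.ll.Len() >= c.cap {
		// Evict the least recently used item (back of the list).
		last := c.ll.Back()
		c.ll.Remove(last)
		delete(c.items, last.Value.(*entry).key)
	}
	c.items[key] = c.ll.PushFront(&entry{key, value})
}

func main() {
	cache := NewLRUCache(2)
	cache.Put("a", "1")
	cache.Put("b", "2")
	fmt.Println(cache.Get("a")) // returns "1", true
	cache.Put("c", "3")         // evicts "b", the least recently used
	fmt.Println(cache.Get("b")) // returns "", false
}
```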
Worker Pool
Implement a worker pool that processes jobs from a channel using a fixed number of workers. Each worker should:
- Read jobs from a jobs channel.
- Process the job.
- Send the results to a results channel.
Function Signature
package main
func startWorkerPool(numWorkers int, jobs <-chan int, results chan<- int) {
// Your code here
}
Requirements
- Input:
  - numWorkers: The number of workers to spawn.
  - jobs: A channel containing jobs (integers) to be processed.
- Output:
  - results: A channel where the results of the job processing are sent.
- Behavior:
  - The worker pool should start exactly numWorkers goroutines.
  - Each worker should read from the jobs channel, process the job, and send the result to the results channel.
  - When the jobs channel is closed, all workers should stop processing.
  - Close the results channel once all jobs are processed.
Example Usage
package main
import "fmt"
func main() {
jobs := make(chan int, 5)
results := make(chan int, 5)
// Add jobs to the jobs channel
for i := 1; i <= 5; i++ {
jobs <- i
}
close(jobs)
// Start the worker pool
startWorkerPool(3, jobs, results)
// Print results from the results channel
for result := range results {
fmt.Println(result)
}
}
Notes
- Each job can represent a task like squaring a number, performing a calculation, or any computational work.
- Ensure all workers terminate gracefully when there are no more jobs to process.
- The results channel should be closed by the worker pool once all results are sent.
Hints
- Use a sync.WaitGroup to manage the lifecycle of worker goroutines.
- Ensure the results channel is only closed after all workers finish processing.
- Each worker should operate in its own goroutine to process jobs concurrently.
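For reference, one possible solution following the hints (squaring each job is the illustrative "processing" step):

```go
package main

import (
	"fmt"
	"sync"
)

// startWorkerPool spawns numWorkers goroutines that square each job,
// then closes results once every worker has finished.
func startWorkerPool(numWorkers int, jobs <-chan int, results chan<- int) {
	var wg sync.WaitGroup
	wg.Add(numWorkers)
	for w := 0; w < numWorkers; w++ {
		go func() {
			defer wg.Done()
			for j := range jobs { // loop ends when jobs is closed and drained
				results <- j * j // illustrative processing: square the job
			}
		}()
	}
	go func() {
		wg.Wait()
		close(results) // safe: all senders have finished
	}()
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)
	for i := 1; i <= 5; i++ {
		jobs <- i
	}
	close(jobs)
	startWorkerPool(3, jobs, results)
	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // 1+4+9+16+25 = 55
}
```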
Token Bucket Rate Limiter
Task Description
Implement a token bucket rate limiter that allows a fixed number of requests per second. The rate limiter should:
- Allow a specified number of requests (rate) per second.
- Provide a method Allow() bool that:
  - Returns true if a request is allowed (a token is available).
  - Returns false if no tokens are available.
Function Signatures
package main
type RateLimiter struct {
// Your code here
}
func NewRateLimiter(rate int) *RateLimiter {
// Your code here
return nil
}
func (r *RateLimiter) Allow() bool {
// Your code here
return false
}
Requirements
- Token Bucket Behavior:
  - The rate limiter should start with a full bucket of tokens (equal to rate).
  - Each second, the bucket should be refilled with tokens up to the maximum capacity (rate).
  - Requests can only proceed if tokens are available; each request consumes one token.
- Concurrency:
  - Ensure thread-safe operation for concurrent calls to Allow().
- Edge Cases:
  - Handle scenarios where no requests are made for an extended period (tokens should still refill).
  - Ensure tokens do not exceed the maximum capacity (rate).
Example Usage
package main
import (
"fmt"
"time"
)
func main() {
rl := NewRateLimiter(2)
// Simulate requests
for i := 0; i < 5; i++ {
fmt.Println(rl.Allow()) // true for the first two requests, then false until the bucket refills
time.Sleep(300 * time.Millisecond)
}
time.Sleep(1 * time.Second) // Wait for tokens to refill
fmt.Println(rl.Allow()) // Prints true after refill
fmt.Println(rl.Allow()) // Prints true
fmt.Println(rl.Allow()) // Prints false (exhausted tokens)
close(rl.stopChan) // Stop the refiller goroutine
}
Notes
- Use a ticker to periodically refill the tokens.
- Protect shared data (e.g., token count) with a synchronization mechanism like `sync.Mutex`.
- Ensure proper cleanup of any goroutines when the rate limiter is no longer in use.
Hints
- Use a `sync.Mutex` to synchronize access to the token count.
- Create a separate goroutine to handle token refills at fixed intervals.
- Use a `chan struct{}` or a similar mechanism to stop the refill goroutine gracefully.
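One way to fill in the skeleton, matching the `stopChan` field the example usage assumes (the field names here are otherwise illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// RateLimiter is a token bucket: the bucket starts full, and a background
// goroutine refills it to capacity once per second until stopChan is closed.
type RateLimiter struct {
	mu       sync.Mutex
	tokens   int
	rate     int
	stopChan chan struct{}
}

func NewRateLimiter(rate int) *RateLimiter {
	rl := &RateLimiter{tokens: rate, rate: rate, stopChan: make(chan struct{})}
	go rl.refill()
	return rl
}

// refill tops the bucket back up to capacity every second.
func (r *RateLimiter) refill() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			r.mu.Lock()
			r.tokens = r.rate // never exceeds capacity
			r.mu.Unlock()
		case <-r.stopChan:
			return
		}
	}
}

// Allow consumes one token if available and reports whether the request may proceed.
func (r *RateLimiter) Allow() bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.tokens > 0 {
		r.tokens--
		return true
	}
	return false
}

func main() {
	rl := NewRateLimiter(2)
	fmt.Println(rl.Allow(), rl.Allow(), rl.Allow()) // true true false
	close(rl.stopChan)
}
```

Refilling to full capacity each tick is the simplest policy; a finer-grained variant adds fractional tokens per tick instead.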
Concurrency-Safe Memoization
Task Description
Implement a concurrency-safe memoization function. Given a function
f(key string) string
, return a new function that:
- Caches results for each key, so subsequent calls with the same key return the cached result.
- Ensures thread safety for concurrent calls to the memoized function.
Function Signature
package main
func Memoize(f func(string) string) func(string) string {
// Your code here
return nil
}
Requirements
- Caching Behavior:
- The returned function should store results for previously computed keys.
- If a key is already cached, return the cached result instead of recomputing.
- Concurrency:
- Ensure the memoization is thread-safe, allowing multiple goroutines to access the memoized function simultaneously without issues.
- Correctness:
- Handle multiple unique keys, ensuring each key is computed and cached correctly.
- Avoid redundant calls to the underlying function `f` for cached keys.
Example Usage
package main
import (
"fmt"
)
func main() {
slowFunc := func(s string) string {
// Simulate slow computation
return s + "-computed"
}
memoized := Memoize(slowFunc)
// First call computes and caches the result
fmt.Println(memoized("test")) // Output: "test-computed"
// Second call retrieves the cached result
fmt.Println(memoized("test")) // Output: "test-computed"
}
Notes
- Use a mutex (e.g., `sync.Mutex`) to synchronize access to the cache.
- Store cached results in a `map[string]string`.
- Ensure that the function behaves correctly for both single-threaded and multi-threaded scenarios.
Hints
- Before computing the result for a key, check if it exists in the cache.
- Lock the cache for reading and writing to ensure thread safety.
- Avoid holding the lock while executing the underlying function `f` to prevent deadlocks or reduced concurrency.
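A minimal sketch following those hints. Note the trade-off it makes: computing outside the lock keeps slow calls from serializing, but two goroutines that miss the cache simultaneously may both call `f` for the same key (harmless when `f` is pure):

```go
package main

import (
	"fmt"
	"sync"
)

// Memoize wraps f with a mutex-guarded cache.
func Memoize(f func(string) string) func(string) string {
	var mu sync.Mutex
	cache := make(map[string]string)
	return func(key string) string {
		mu.Lock()
		if v, ok := cache[key]; ok {
			mu.Unlock()
			return v
		}
		mu.Unlock()
		// Compute outside the lock to avoid blocking other keys.
		v := f(key)
		mu.Lock()
		cache[key] = v
		mu.Unlock()
		return v
	}
}

func main() {
	calls := 0
	memoized := Memoize(func(s string) string {
		calls++
		return s + "-computed"
	})
	fmt.Println(memoized("test")) // test-computed
	fmt.Println(memoized("test")) // test-computed (cached)
	fmt.Println(calls)            // 1
}
```

To guarantee exactly one call per key even under concurrent misses, per-key locking (e.g., `sync.Once` per entry) can replace the single mutex.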
Binary Search Tree (BST)
Task Description
Implement a binary search tree with the following methods:
- Insert(val int):
- Inserts a value into the binary search tree.
- Ensures the tree maintains the binary search property:
- Values in the left subtree are less than the current node.
- Values in the right subtree are greater than the current node.
- Duplicate values should not be added.
- Search(val int) bool:
- Searches for a value in the binary search tree.
- Returns `true` if the value is found, otherwise returns `false`.
Function Signatures
package main
type BST struct {
val int
left *BST
right *BST
}
func (b *BST) Insert(val int) {
// Your code here
}
func (b *BST) Search(val int) bool {
// Your code here
return false
}
Requirements
- Insert Method:
- Adds the value in the correct position based on the binary search property.
- Avoids adding duplicate values.
- Search Method:
- Recursively checks the left or right subtree depending on the value being searched.
- Returns `false` if a `nil` node is reached, indicating the value is not in the tree.
- Edge Cases:
- Inserting into an empty tree.
- Searching in an empty tree.
- Duplicate insertions should not modify the tree.
Example Usage
package main
import "fmt"
func main() {
var tree BST
// Insert values into the BST
tree.Insert(5)
tree.Insert(3)
tree.Insert(7)
// Search for values
fmt.Println(tree.Search(3)) // Output: true
fmt.Println(tree.Search(10)) // Output: false
// Insert more values
tree.Insert(4)
tree.Insert(6)
// Validate structure manually or via tests
}
Notes
- Ensure the binary search property is maintained after every insertion.
- Avoid duplicate nodes when inserting a value that already exists in the tree.
- Use recursion to simplify the implementation of both `Insert` and `Search`.
Hints
- Insert Method:
- Check if the value is less than the current node’s value to decide whether to insert into the left subtree.
- Check if the value is greater to decide whether to insert into the right subtree.
- If the node is `nil`, create a new node with the given value.
- Search Method:
- Use recursion to traverse the tree.
- Return `true` when the current node’s value matches the search value.
- Return `false` if a `nil` node is reached.
Min-Heap
Task Description
Implement a min-heap with the following methods:
- Insert(val int):
- Adds an integer to the heap while maintaining the heap property:
- The smallest element is always at the root.
- Each parent node is smaller than its child nodes.
- Adds an integer to the heap while maintaining the heap property:
- ExtractMin() (int, bool):
- Removes and returns the smallest element (root) from the heap.
- Returns `false` if the heap is empty.
Function Signatures
package main
type MinHeap struct {
// Your code here
}
func (h *MinHeap) Insert(val int) {
// Your code here
}
func (h *MinHeap) ExtractMin() (int, bool) {
// Your code here
return 0, false
}
Requirements
- Heap Property:
  - The heap should maintain the min-heap property:
    - The smallest element is always at the root (`h.data[0]`).
    - The children of index `i` are at indices `2i+1` and `2i+2`.
    - The parent of index `i` is at `(i-1)/2`.
- Insert Method:
- Add the new element to the end of the heap.
- Restore the heap property by “bubbling up” the element.
- ExtractMin Method:
- Swap the root with the last element, then remove the last element.
- Restore the heap property by “bubbling down” the root element.
- Edge Cases:
- Handle extraction from an empty heap.
- Ensure the heap property is maintained after multiple insertions and extractions.
Example Usage
package main
import "fmt"
func main() {
h := &MinHeap{}
h.Insert(5)
h.Insert(3)
h.Insert(8)
fmt.Println(h.ExtractMin()) // Output: 3, true
fmt.Println(h.ExtractMin()) // Output: 5, true
fmt.Println(h.ExtractMin()) // Output: 8, true
fmt.Println(h.ExtractMin()) // Output: 0, false (heap is empty)
}
Notes
- Use a slice (`[]int`) as the underlying data structure for the heap.
- Implement helper functions to “bubble up” and “bubble down” elements as needed.
- Ensure that all operations maintain the min-heap property efficiently.
Hints
- Bubble Up:
- Starting from the last inserted element, compare it with its parent.
- Swap the element with its parent if it is smaller, and continue until the heap property is restored.
- Bubble Down:
- Starting from the root, compare it with its children.
- Swap it with the smaller child if it violates the heap property, and continue until the property is restored.
- Index Calculations:
  - Parent index: `(i-1)/2`
  - Left child index: `2i+1`
  - Right child index: `2i+2`
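The bubble-up and bubble-down steps above translate directly into a slice-backed sketch (the `data` field name is an assumption consistent with the `h.data[0]` notation used in the requirements):

```go
package main

import "fmt"

// MinHeap stores elements in a slice; data[0] is always the minimum.
type MinHeap struct {
	data []int
}

func (h *MinHeap) Insert(val int) {
	h.data = append(h.data, val)
	// Bubble up: swap with the parent while smaller than it.
	i := len(h.data) - 1
	for i > 0 {
		parent := (i - 1) / 2
		if h.data[i] >= h.data[parent] {
			break
		}
		h.data[i], h.data[parent] = h.data[parent], h.data[i]
		i = parent
	}
}

func (h *MinHeap) ExtractMin() (int, bool) {
	if len(h.data) == 0 {
		return 0, false
	}
	root := h.data[0]
	last := len(h.data) - 1
	h.data[0] = h.data[last]
	h.data = h.data[:last]
	// Bubble down: swap with the smaller child while out of order.
	i := 0
	for {
		left, right, smallest := 2*i+1, 2*i+2, i
		if left < len(h.data) && h.data[left] < h.data[smallest] {
			smallest = left
		}
		if right < len(h.data) && h.data[right] < h.data[smallest] {
			smallest = right
		}
		if smallest == i {
			break
		}
		h.data[i], h.data[smallest] = h.data[smallest], h.data[i]
		i = smallest
	}
	return root, true
}

func main() {
	h := &MinHeap{}
	h.Insert(5)
	h.Insert(3)
	h.Insert(8)
	fmt.Println(h.ExtractMin()) // 3 true
	fmt.Println(h.ExtractMin()) // 5 true
	fmt.Println(h.ExtractMin()) // 8 true
	fmt.Println(h.ExtractMin()) // 0 false
}
```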
Concurrency-Safe Map
Task Description
Implement a concurrency-safe map with the following methods:
- Get(key string) (string, bool):
- Retrieves the value associated with the key.
- Returns `false` if the key does not exist.
- Set(key, value string):
- Stores the value associated with the key.
- Overwrites the value if the key already exists.
- Delete(key string):
- Removes the key-value pair from the map.
- If the key does not exist, the method should handle it gracefully.
Function Signatures
package main
type ConcurrentMap struct {
// Your code here
}
func NewConcurrentMap() *ConcurrentMap {
// Your code here
return nil
}
func (m *ConcurrentMap) Get(key string) (string, bool) {
// Your code here
return "", false
}
func (m *ConcurrentMap) Set(key, value string) {
// Your code here
}
func (m *ConcurrentMap) Delete(key string) {
// Your code here
}
Requirements
- Concurrency Safety:
  - Use a `sync.RWMutex` to allow multiple readers or a single writer at a time.
  - Ensure that `Set` and `Delete` operations are exclusive and block other operations.
- Efficiency:
  - Use `RLock` for read operations to allow multiple readers concurrently.
  - Use `Lock` for write operations to ensure thread safety.
- Edge Cases:
- Handle retrieval of non-existent keys gracefully.
- Ensure deletion of non-existent keys does not cause errors.
Example Usage
package main
import "fmt"
func main() {
cm := NewConcurrentMap()
cm.Set("foo", "bar")
val, ok := cm.Get("foo")
if ok {
fmt.Println("Value for 'foo':", val) // Output: Value for 'foo': bar
}
cm.Delete("foo")
_, ok = cm.Get("foo")
if !ok {
fmt.Println("'foo' was deleted.") // Output: 'foo' was deleted.
}
}
Notes
- Use a `map[string]string` as the underlying storage for key-value pairs.
- Protect all operations on the map with a `sync.RWMutex` to ensure thread safety.
- The `Delete` method should handle keys that do not exist gracefully.
Hints
- Get Method:
  - Use `RLock` to allow multiple concurrent readers.
  - Ensure you release the lock after retrieving the value.
- Set Method:
  - Use `Lock` to prevent other readers or writers from accessing the map while updating it.
- Delete Method:
  - Use `Lock` to ensure exclusive access while removing a key.
- Initialization:
  - Provide a constructor `NewConcurrentMap` to initialize the map and mutex properly.
Pipeline with Generator and Filter
Task Description
Implement a pipeline that consists of the following stages:
- Generator:
- Sends a sequence of integers into a channel.
- Operates within its own goroutine.
- Filter:
- Reads integers from an input channel and filters them based on a condition (e.g., keep only even numbers).
- Operates within its own goroutine and sends the filtered results into an output channel.
- Consumer:
- Reads integers from the filtered output channel.
These stages should be composed using channels to demonstrate a pipeline pattern in concurrent Go programming.
Function Signatures
package main
func generator(nums ...int) <-chan int {
// Your code here
return nil
}
func filterEven(in <-chan int) <-chan int {
// Your code here
return nil
}
Requirements
- Generator:
  - Accepts a variadic number of integers (`nums ...int`).
  - Sends each integer into a channel, then closes the channel.
- Filter:
  - Accepts an input channel (`<-chan int`).
  - Sends only even numbers into an output channel, then closes the channel.
- Concurrency:
- Each stage (generator and filter) must run in its own goroutine.
- Ensure channels are properly closed when processing is complete.
- Composition:
- Combine these stages into a pipeline where the generator feeds the filter, and the filter feeds the consumer.
Example Usage
package main
import "fmt"
func main() {
// Create a pipeline
gen := generator(1, 2, 3, 4, 5, 6)
even := filterEven(gen)
// Consume the filtered output
for num := range even {
fmt.Println(num) // Output: 2, 4, 6
}
}
Notes
- Use channels to connect the stages of the pipeline.
- Each stage should close its output channel when done.
- Ensure the pipeline can handle empty input gracefully.
Hints
- Generator:
- Create a channel and launch a goroutine to send values into it.
- Close the channel when all values are sent.
- Filter:
- Create an output channel and launch a goroutine to process values from the input channel.
- Close the output channel after all input values are processed.
- Channel Communication:
  - Use `range` to iterate over values from a channel.
  - Ensure channels are closed to signal completion to downstream stages.
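Both stages follow the same shape: create an output channel, launch a goroutine that writes and then closes it, and return the channel immediately. A sketch:

```go
package main

import "fmt"

// generator emits each number on a fresh channel, closing it when done.
func generator(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

// filterEven forwards only even values, closing out once in is drained.
func filterEven(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			if n%2 == 0 {
				out <- n
			}
		}
	}()
	return out
}

func main() {
	for n := range filterEven(generator(1, 2, 3, 4, 5, 6)) {
		fmt.Println(n) // 2, 4, 6
	}
}
```

Empty input works without special-casing: `generator()` closes its channel immediately, so the filter and consumer simply see a closed channel.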
Cycle Detection in Singly Linked List
Task Description
Implement a function to detect if a singly linked list has a cycle. A cycle exists if a node’s `Next` pointer points back to a previous node, creating a loop.
Function Signature
package main
type ListNode struct {
Val int
Next *ListNode
}
func hasCycle(head *ListNode) bool {
// Your code here
return false
}
Requirements
- Input:
  - `head`: The head of a singly linked list (`ListNode`).
- Output:
  - Returns `true` if a cycle is detected, otherwise `false`.
- Behavior:
  - Use Floyd’s Cycle Detection Algorithm (Tortoise and Hare):
    - Use two pointers: `slow` moves one step at a time, and `fast` moves two steps.
    - If `slow` and `fast` meet, a cycle exists.
    - If `fast` or `fast.Next` becomes `nil`, the list does not have a cycle.
Example Usage
package main
import "fmt"
func main() {
n1 := &ListNode{Val: 1}
n2 := &ListNode{Val: 2}
n3 := &ListNode{Val: 3}
n1.Next = n2
n2.Next = n3
// Test without cycle
fmt.Println(hasCycle(n1)) // Output: false
// Create a cycle
n3.Next = n1
fmt.Println(hasCycle(n1)) // Output: true
}
Notes
- Cycle:
  - A cycle exists if a node points back to an earlier node in the list.
  - Example: `1 -> 2 -> 3 -> 4 -> 2` (cycle starts at the node with value 2).
- Algorithm Details:
  - Use two pointers (`slow` and `fast`) to traverse the list.
  - If `slow` equals `fast`, a cycle is detected.
  - If the list ends (`fast == nil` or `fast.Next == nil`), there is no cycle.
Edge Cases
- An empty list (`head == nil`).
- A single node with no cycle.
- A single node pointing to itself (cycle).
- Multiple nodes with and without cycles.
- Cycles at various positions in the list (beginning, middle, or end).
Hints
- Detecting Cycle:
- Use a two-pointer technique to traverse the list at different speeds.
- If they meet, the list contains a cycle.
- Avoid Infinite Loops:
- Use the fast pointer’s ability to skip nodes to terminate the loop if no cycle exists.
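Floyd’s algorithm fits in a few lines; the `fast != nil && fast.Next != nil` guard is what terminates the loop on acyclic lists:

```go
package main

import "fmt"

type ListNode struct {
	Val  int
	Next *ListNode
}

// hasCycle applies the tortoise-and-hare technique: fast moves two steps
// per iteration and slow one, so they can only meet if the list loops.
func hasCycle(head *ListNode) bool {
	slow, fast := head, head
	for fast != nil && fast.Next != nil {
		slow = slow.Next
		fast = fast.Next.Next
		if slow == fast {
			return true
		}
	}
	return false
}

func main() {
	n1 := &ListNode{Val: 1}
	n2 := &ListNode{Val: 2}
	n3 := &ListNode{Val: 3}
	n1.Next, n2.Next = n2, n3
	fmt.Println(hasCycle(n1)) // false
	n3.Next = n1              // create a cycle
	fmt.Println(hasCycle(n1)) // true
}
```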
Tarjan’s Algorithm for Strongly Connected Components (SCCs)
Task Description
Implement Tarjan’s Algorithm to find all strongly connected components (SCCs) in a directed graph. The graph is represented as an adjacency list: `map[int][]int`.
Function Signature
package main
func tarjanSCC(graph map[int][]int) [][]int {
// Your code here
return nil
}
Requirements
- Input:
  - `graph`: A directed graph represented as an adjacency list where the keys are node IDs and the values are slices of neighboring nodes.
- Output:
  - A slice of slices, where each inner slice represents an SCC containing node IDs.
  - SCCs can be in any order, but the nodes within each SCC must be grouped together.
- Behavior:
  - Use Tarjan’s Algorithm to find SCCs:
    - Perform Depth-First Search (DFS).
    - Maintain `index` and `low-link` values for each node.
    - Use a stack to track nodes currently in the DFS stack.
    - Identify SCCs when the `low-link` value equals the `index` value for a node.
- Complexity:
  - Time Complexity: O(V + E), where `V` is the number of nodes and `E` is the number of edges.
Example Usage
package main
import "fmt"
func main() {
graph := map[int][]int{
1: {2},
2: {3},
3: {1, 4},
4: {5},
5: {6, 7},
6: {4},
7: {},
}
fmt.Println(tarjanSCC(graph))
// Output (SCC order and node order within each SCC may vary): [[1 2 3] [4 5 6] [7]]
// Note: 4 -> 5 -> 6 -> 4 forms a cycle, so {4, 5, 6} is a single SCC.
}
Notes
- Strongly Connected Components:
  - An SCC is a maximal subgraph where every node is reachable from every other node in the subgraph.
  - Example: for the graph `1 -> 2 -> 3 -> 1`, the SCC is `[1, 2, 3]`.
- Algorithm Details:
  - Maintain:
    - `index`: Tracks the order of discovery for each node.
    - `low-link`: The smallest index reachable from the node.
  - Use a stack to manage nodes in the current DFS path.
Edge Cases
- An empty graph (`graph == nil`).
- A graph with no edges (`graph[node] == []` for all nodes).
- Disconnected graphs.
- Cycles of varying sizes.
- Single-node SCCs (e.g., nodes with no edges).
Hints
- Initialization:
  - Use maps to track `index` and `low-link` values for nodes.
  - Use a boolean map to track whether a node is currently in the DFS stack.
- Algorithm Steps:
  - Start DFS from an unvisited node.
  - Set its `index` and `low-link` values to the current index.
  - Push the node onto the stack and mark it as in the stack.
  - For each neighbor:
    - If unvisited, recursively call DFS and update the node’s `low-link` value.
    - If in the stack, update the `low-link` value based on the neighbor’s `index`.
  - If `low-link[node] == index[node]`, extract an SCC by popping nodes from the stack until the current node is reached.
- Output:
- Collect SCCs during the DFS process and return them at the end.
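Following the hinted steps, here is one possible sketch. It assumes every node appears as a key in `graph` (nodes that only appear as neighbors would need to be seeded into the maps as well):

```go
package main

import "fmt"

// tarjanSCC runs one DFS pass maintaining discovery indices, low-links,
// and an explicit stack of nodes on the current DFS path.
func tarjanSCC(graph map[int][]int) [][]int {
	index := make(map[int]int)   // discovery order of each node
	lowlink := make(map[int]int) // smallest index reachable from the node
	onStack := make(map[int]bool)
	var stack []int
	var sccs [][]int
	next := 0

	var dfs func(v int)
	dfs = func(v int) {
		index[v], lowlink[v] = next, next
		next++
		stack = append(stack, v)
		onStack[v] = true
		for _, w := range graph[v] {
			if _, seen := index[w]; !seen {
				dfs(w)
				if lowlink[w] < lowlink[v] {
					lowlink[v] = lowlink[w]
				}
			} else if onStack[w] && index[w] < lowlink[v] {
				lowlink[v] = index[w]
			}
		}
		// v is the root of an SCC: pop the stack down to v.
		if lowlink[v] == index[v] {
			var scc []int
			for {
				w := stack[len(stack)-1]
				stack = stack[:len(stack)-1]
				onStack[w] = false
				scc = append(scc, w)
				if w == v {
					break
				}
			}
			sccs = append(sccs, scc)
		}
	}

	for v := range graph {
		if _, seen := index[v]; !seen {
			dfs(v)
		}
	}
	return sccs
}

func main() {
	graph := map[int][]int{1: {2}, 2: {3}, 3: {1, 4}, 4: {5}, 5: {6, 7}, 6: {4}, 7: {}}
	fmt.Println(tarjanSCC(graph)) // three SCCs: {1,2,3}, {4,5,6}, {7} (order varies)
}
```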
Dijkstra’s Algorithm for Shortest Paths
Task Description
Implement Dijkstra’s algorithm to find the shortest distances from a given start node to all other nodes in a weighted, directed graph. The graph is represented as an adjacency list of the form `map[int][]struct{to, weight int}`.
Function Signature
package main
func dijkstra(graph map[int][]struct{ to, weight int }, start int) map[int]int {
// Your code here
return nil
}
Requirements
- Input:
  - `graph`: A directed graph represented as an adjacency list where:
    - Keys are node IDs.
    - Values are slices of structs containing:
      - `to`: The destination node.
      - `weight`: The weight of the edge.
  - `start`: The node from which to calculate shortest distances.
- Output:
  - A map where:
    - Keys are node IDs.
    - Values are the shortest distances from the `start` node to the respective node.
  - If a node is not reachable from the `start`, its distance should be `math.MaxInt`.
- Behavior:
  - Use a priority queue (min-heap) to always process the node with the smallest known distance.
  - Relax edges to update the shortest known distances.
  - Return the computed shortest distances for all nodes.
- Complexity:
  - Time Complexity: O((V + E) log V), where `V` is the number of nodes and `E` is the number of edges.
Example Usage
package main
import "fmt"
func main() {
graph := map[int][]struct{ to, weight int }{
0: {{to: 1, weight: 4}, {to: 2, weight: 1}},
1: {{to: 3, weight: 1}},
2: {{to: 1, weight: 2}, {to: 3, weight: 5}},
3: {},
}
start := 0
dist := dijkstra(graph, start)
fmt.Println(dist)
// Output:
// map[0:0 1:3 2:1 3:4]
}
Notes
- Algorithm Details:
  - Use a priority queue to manage nodes and their current shortest distances.
  - Initialize all distances to `math.MaxInt` (infinity), except the `start` node, which is set to `0`.
  - Process each node:
    - For each neighbor, calculate the distance through the current node.
    - If the calculated distance is shorter than the known distance, update it and push the neighbor into the priority queue.
- Edge Cases:
- A graph with no nodes or edges.
- Disconnected graphs where some nodes are unreachable.
- A single node graph.
Hints
- Priority Queue:
  - Use a min-heap (`container/heap`) to efficiently retrieve the next node with the smallest distance.
- Relaxation:
  - For each edge `(u, v)` with weight `w`:
    - If `dist[u] + w < dist[v]`, update `dist[v]` to `dist[u] + w`.
- Initialization:
  - Set the distance of all nodes to `math.MaxInt`.
  - Set the distance of the `start` node to `0`.
Topological Sort for Directed Acyclic Graph (DAG)
Task Description
Implement a function to perform topological sort on a directed acyclic graph (DAG). The function should return one possible ordering of nodes such that for every directed edge `(u, v)`, node `u` appears before node `v` in the ordering.
Function Signature
package main
func topologicalSort(graph map[int][]int) []int {
// Your code here
return nil
}
Requirements
- Input:
  - `graph`: A directed acyclic graph represented as an adjacency list:
    - Keys are node IDs.
    - Values are slices of integers representing neighboring nodes.
- Output:
- A slice of integers representing one valid topological ordering of the nodes.
- Behavior:
- Compute the in-degree (number of incoming edges) for each node.
- Initialize a queue with all nodes that have an in-degree of 0.
- Process nodes in the queue:
- Append the node to the order.
- Decrement the in-degree of its neighbors.
- Add neighbors with an in-degree of 0 to the queue.
- Return the order once all nodes are processed.
- Complexity:
  - Time Complexity: O(V + E), where `V` is the number of nodes and `E` is the number of edges.
Example Usage
package main
import "fmt"
func main() {
graph := map[int][]int{
0: {1, 2},
1: {3},
2: {3},
3: {},
}
order := topologicalSort(graph)
fmt.Println(order)
// Output: [0, 1, 2, 3] or [0, 2, 1, 3] (any valid topological order)
}
Notes
- Topological Sort:
  - Only applicable to directed acyclic graphs (DAGs).
  - A valid ordering ensures that for every edge `(u, v)`, `u` appears before `v`.
- Algorithm Details:
- Compute in-degrees for all nodes.
- Use a queue to process nodes with in-degree 0.
- Append nodes to the order and update in-degrees of their neighbors.
Edge Cases
- Empty Graph:
  - Input graph has no nodes (`graph == nil`).
  - Output should be an empty slice.
- Disconnected Graph:
  - Graph with multiple nodes but no edges.
  - Any permutation of nodes is a valid topological order.
- Single Node:
  - Graph contains a single node with no edges.
  - The order should contain just that node.
- Multiple Valid Orders:
  - Graph with multiple valid topological orders.
  - The function can return any valid order.
Hints
- In-degree Calculation:
- Traverse the adjacency list to count the incoming edges for each node.
- Queue Initialization:
- Add nodes with an in-degree of 0 to the queue at the start.
- Order Construction:
- Use a queue to process nodes in in-degree order.
- For each node, decrement the in-degree of its neighbors and enqueue any neighbor whose in-degree becomes 0.
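The in-degree approach described above is Kahn’s algorithm; a sketch:

```go
package main

import "fmt"

// topologicalSort implements Kahn's algorithm: repeatedly emit nodes with
// in-degree 0 and decrement the in-degree of their neighbors.
func topologicalSort(graph map[int][]int) []int {
	inDegree := make(map[int]int)
	for node := range graph {
		inDegree[node] += 0 // ensure every node has an entry
	}
	for _, neighbors := range graph {
		for _, n := range neighbors {
			inDegree[n]++
		}
	}
	var queue []int
	for node, d := range inDegree {
		if d == 0 {
			queue = append(queue, node)
		}
	}
	order := make([]int, 0, len(graph))
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		order = append(order, node)
		for _, n := range graph[node] {
			inDegree[n]--
			if inDegree[n] == 0 {
				queue = append(queue, n)
			}
		}
	}
	return order
}

func main() {
	graph := map[int][]int{0: {1, 2}, 1: {3}, 2: {3}, 3: {}}
	fmt.Println(topologicalSort(graph)) // [0 1 2 3] or [0 2 1 3]
}
```

A useful side effect: if the input accidentally contains a cycle, `len(order)` will be smaller than the number of nodes, which makes cycle detection a one-line check.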
Pub/Sub System
Task Description
Implement a simple Publish/Subscribe (Pub/Sub) system with the following features:
- Subscribers can register to receive messages by subscribing.
- Publishers can send messages, which are delivered to all subscribers.
- The system should support concurrent publishers and subscribers safely.
Function Signatures
package main
type PubSub struct {
// Your code here
}
func NewPubSub() *PubSub {
// Your code here
return nil
}
func (ps *PubSub) Subscribe() <-chan string {
// Your code here
return nil
}
func (ps *PubSub) Publish(msg string) {
// Your code here
}
Requirements
- Concurrency Safety:
- Ensure thread-safe access to the list of subscribers.
- Use synchronization primitives such as `sync.RWMutex`.
- Subscriber Channels:
- Each subscriber receives messages on their own channel.
- Use buffered channels to avoid deadlocks during publishing.
- Publishing:
- Deliver each message to all current subscribers.
- Subscribers added after a message is published should not receive that message.
- Scalability:
- Support multiple concurrent publishers and subscribers efficiently.
Example Usage
package main
import "fmt"
func main() {
ps := NewPubSub()
sub1 := ps.Subscribe()
sub2 := ps.Subscribe()
go func() {
ps.Publish("hello")
}()
fmt.Println(<-sub1) // Output: "hello"
fmt.Println(<-sub2) // Output: "hello"
}
Notes
- Subscription:
- A new subscriber is added by creating a new channel and appending it to the list of subscribers.
- Publishing:
- On publishing a message, iterate over the list of subscribers and send the message to each subscriber’s channel.
- Channel Management:
- Ensure proper handling of channels (e.g., closing channels when necessary).
Edge Cases
- Publishing messages when there are no subscribers.
- Handling subscribers that are added after messages have been published.
- Ensuring no subscriber misses a message when multiple publishers publish concurrently.
- Subscribers unsubscribing or being garbage collected (if implemented).
Hints
- Mutex Usage:
  - Use `sync.RWMutex` for protecting the subscriber list:
    - `RLock` for read-only access (e.g., publishing).
    - `Lock` for modifying the list (e.g., subscribing).
- Channel Buffering:
  - Use buffered channels for subscriber channels to prevent deadlocks if a subscriber is slow to consume messages.
- Iterating Over Subscribers:
  - Ensure thread safety when iterating over the list of subscribers during publishing.
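A minimal sketch along those lines. The buffer size (16 here) is an arbitrary assumption; `Publish` still blocks if a subscriber falls more than a buffer's worth behind, which a production version would handle by dropping messages or evicting slow subscribers:

```go
package main

import (
	"fmt"
	"sync"
)

// PubSub delivers every published message to all current subscribers.
type PubSub struct {
	mu   sync.RWMutex
	subs []chan string
}

func NewPubSub() *PubSub {
	return &PubSub{}
}

// Subscribe registers a new buffered channel and returns its receive side.
func (ps *PubSub) Subscribe() <-chan string {
	ps.mu.Lock()
	defer ps.mu.Unlock()
	ch := make(chan string, 16)
	ps.subs = append(ps.subs, ch)
	return ch
}

// Publish sends msg to every subscriber registered at the time of the call.
func (ps *PubSub) Publish(msg string) {
	ps.mu.RLock()
	defer ps.mu.RUnlock()
	for _, ch := range ps.subs {
		ch <- msg
	}
}

func main() {
	ps := NewPubSub()
	sub1 := ps.Subscribe()
	sub2 := ps.Subscribe()
	ps.Publish("hello")
	fmt.Println(<-sub1) // hello
	fmt.Println(<-sub2) // hello
}
```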
Parallel File Processing
Task Description
Implement a function that processes a list of files in parallel using a specified number of worker goroutines. Each worker processes files concurrently by calling a dummy processing function on each file.
Function Signature
package main
func processFiles(files []string, workers int) {
// Your code here
}
Requirements
- Input:
  - `files`: A slice of filenames (strings) to be processed.
  - `workers`: The number of worker goroutines to use for concurrent processing.
- Behavior:
  - Process all files in the `files` slice by passing each to a predefined dummy function (`testProcessFunc`).
  - Distribute the workload across the specified number of worker goroutines.
- Concurrency:
- Use a worker pool pattern:
- Create a channel to pass filenames to workers.
- Spawn the specified number of workers.
- Ensure all files are processed and all workers exit gracefully.
- Use a worker pool pattern:
- Edge Cases:
  - If `files` is empty, the function should do nothing.
  - If `workers` is 0 or negative, default to using 1 worker.
Example Usage
package main
import "fmt"
// Predefined dummy processing function called by processFiles
func testProcessFunc(file string) {
fmt.Println("Processing:", file)
}
func main() {
files := []string{"file1", "file2", "file3"}
// Process files using 3 workers
processFiles(files, 3)
}
Notes
- Worker Pool Pattern:
- Create a fixed number of workers, each consuming tasks from a shared channel.
- Close the channel after sending all files to signal workers to stop.
- Thread Safety:
  - If maintaining shared state (e.g., tracking processed files), use synchronization primitives like `sync.Mutex`.
- Default Behavior:
  - If `workers` is invalid (e.g., ≤ 0), default to a single worker.
Edge Cases
- Empty `files` slice (`files == nil` or `len(files) == 0`).
- Negative or zero `workers`.
- Very large number of files.
- Long-running or blocking dummy processing function.
Hints
- Channel Usage:
- Use a channel to pass filenames to workers.
- Close the channel after sending all filenames.
- Worker Loop:
- Each worker should:
- Continuously read from the channel.
- Exit when the channel is closed.
- Each worker should:
- Synchronization:
  - Use `sync.WaitGroup` to ensure all workers finish before returning from the function.
Example Workflow
- Create a channel to pass filenames.
- Spawn `workers` goroutines, each reading from the channel and processing files.
- Send filenames to the channel.
- Close the channel after all filenames are sent.
- Wait for all workers to finish.
Performance Considerations
- Ensure that the number of workers is optimal for the workload.
- Avoid deadlocks by using buffered channels or ensuring timely consumption of messages.
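The workflow above can be sketched as follows. `testProcessFunc` is declared as a package-level variable here (an assumption, so it can be swapped out in tests); the task only requires that it be predefined:

```go
package main

import (
	"fmt"
	"sync"
)

// testProcessFunc is the predefined dummy processor assumed by the task.
var testProcessFunc = func(file string) {
	fmt.Println("Processing:", file)
}

// processFiles fans filenames out to `workers` goroutines via a channel
// and waits for all of them to finish.
func processFiles(files []string, workers int) {
	if len(files) == 0 {
		return
	}
	if workers < 1 {
		workers = 1 // default to a single worker
	}
	ch := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for file := range ch {
				testProcessFunc(file)
			}
		}()
	}
	for _, f := range files {
		ch <- f
	}
	close(ch) // signals workers to exit their range loop
	wg.Wait()
}

func main() {
	processFiles([]string{"file1", "file2", "file3"}, 3)
}
```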
Binary Search
Task Description
Implement a function that performs binary search to find the index of a target integer in a sorted slice. If the target is found, return its index; otherwise, return `-1`.
Function Signature
package main
func binarySearch(arr []int, target int) int {
// Your code here
return -1
}
Requirements
- Input:
  - `arr`: A sorted slice of integers.
  - `target`: An integer to search for in the slice.
- Output:
  - Return the index of `target` if found.
  - Return `-1` if `target` is not in the slice.
- Behavior:
  - Use a binary search algorithm:
    - Start with two pointers, `low` (0) and `high` (`len(arr) - 1`).
    - Calculate `mid = (low + high) / 2`.
    - Compare `arr[mid]` with `target`:
      - If equal, return `mid`.
      - If `arr[mid] < target`, move `low` to `mid + 1`.
      - If `arr[mid] > target`, move `high` to `mid - 1`.
    - Repeat until `low > high`.
- Complexity:
- Time Complexity: O(log n).
- Space Complexity: O(1) (iterative implementation).
Example Usage
package main
import "fmt"
func main() {
arr := []int{1, 2, 3, 4, 5}
fmt.Println(binarySearch(arr, 3)) // Output: 2
fmt.Println(binarySearch(arr, 6)) // Output: -1
}
Notes
- Binary Search:
- Efficient for searching in a sorted slice.
- Works by repeatedly dividing the search space in half.
- Edge Cases:
- Empty slice.
- Target less than the smallest element.
- Target greater than the largest element.
- Target not in the slice but within the range of elements.
Edge Cases
- Empty Slice:
  - Input slice is empty (`arr == nil` or `len(arr) == 0`).
  - Output should be `-1`.
- Single Element:
  - Slice has one element:
    - If the target matches the element, return its index (0).
    - Otherwise, return `-1`.
- Duplicates:
  - If the slice contains duplicate elements, any valid index of the target is acceptable.
- Not Found:
  - Target is outside the range of the smallest and largest elements.
  - Target is within the range but not in the slice.
- Boundary Conditions:
  - Target matches the first or last element in the slice.
Hints
- Midpoint Calculation:
  - Use `mid := low + (high-low)/2` to avoid potential overflow in some environments.
- Loop Termination:
  - Stop the loop when `low > high`.
- Testing for Duplicates:
  - Ensure the returned index corresponds to a valid occurrence of the target, even if there are duplicates.
Example Workflow
- Start with the entire range of the slice (`low = 0`, `high = len(arr) - 1`).
- Calculate the midpoint and compare:
  - If `arr[mid] == target`, return `mid`.
  - If `arr[mid] < target`, discard the left half by updating `low`.
  - If `arr[mid] > target`, discard the right half by updating `high`.
- Repeat until the range is invalid (`low > high`).
Performance Considerations
- Binary search is optimal for sorted slices.
- Ensure correctness for edge cases like empty slices and single-element slices.
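The iterative version described above fits in a dozen lines:

```go
package main

import "fmt"

// binarySearch returns an index of target in the sorted slice arr, or -1.
func binarySearch(arr []int, target int) int {
	low, high := 0, len(arr)-1
	for low <= high {
		mid := low + (high-low)/2 // overflow-safe midpoint
		switch {
		case arr[mid] == target:
			return mid
		case arr[mid] < target:
			low = mid + 1 // discard the left half
		default:
			high = mid - 1 // discard the right half
		}
	}
	return -1
}

func main() {
	arr := []int{1, 2, 3, 4, 5}
	fmt.Println(binarySearch(arr, 3)) // 2
	fmt.Println(binarySearch(arr, 6)) // -1
}
```

The empty-slice case needs no special handling: `high` starts at `-1`, so the loop body never runs and `-1` is returned immediately.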
Fan-Out / Fan-In Task
You need to implement two functions in Go to demonstrate a fan-out/fan-in concurrency pattern:
fanOut(in <-chan int, workers int) []<-chan int
fanIn(channels []<-chan int) <-chan int
Task Requirements
fanOut
- Signature:
func fanOut(in <-chan int, workers int) []<-chan int {
// your code here
return nil
}
- Goal:
  - Launch `workers` goroutines that collectively process all items from the input channel `in`.
  - Each worker should receive a subset of the input items (ensuring each item is processed exactly once by exactly one worker).
  - Each worker outputs processed results to its own channel.
- Return: A slice of channels, each one corresponding to a single worker’s output stream.
fanIn
- Signature:
func fanIn(channels []<-chan int) <-chan int {
// your code here
return nil
}
- Goal:
  - Merge multiple channels (produced by the `fanOut` workers) into a single output channel.
  - Once all worker output channels are fully read, the merged output channel should be closed.
Important Details
- Data Transformation:
  - Unless otherwise specified, assume the data should not be changed. If an integer `n` enters the `in` channel, it should appear unchanged on the final merged output channel.
- Distribution Logic:
  - Each item from `in` must be processed exactly once.
  - You can decide how to distribute items among workers (e.g., round-robin).
- Channel Closure:
  - `fanOut` must properly close each worker’s output channel when processing is finished.
  - `fanIn` must close its combined output channel once all worker channels are exhausted.
- Parallelism:
  - Use goroutines to handle concurrency.
  - Each worker should run in a separate goroutine.
Example Usage
func main() {
in := make(chan int)
go func() {
for i := 1; i <= 5; i++ {
in <- i
}
close(in)
}()
// Distribute items among 2 workers
outs := fanOut(in, 2)
// Merge worker outputs
merged := fanIn(outs)
// Print the results
for val := range merged {
fmt.Println(val)
}
}
Hints
- Consider creating a small “dispatcher” goroutine in `fanOut` that reads from `in` and distributes items across worker-specific input channels.
- Each worker goroutine should read from its own channel, process the data, and send results to its own output channel.
- In `fanIn`, you can use a `sync.WaitGroup` to track when all worker channels have finished sending data. When all are done, close the merged output channel.
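One possible sketch of the pattern. Instead of the dispatcher suggested in the hints, each worker here reads directly from the shared input channel, which also guarantees every item is consumed exactly once; values pass through unchanged.

```go
package main

import (
	"fmt"
	"sync"
)

func fanOut(in <-chan int, workers int) []<-chan int {
	outs := make([]<-chan int, 0, workers)
	for i := 0; i < workers; i++ {
		out := make(chan int)
		go func() {
			defer close(out) // close this worker's channel once in is drained
			for v := range in {
				out <- v // pass the value through unchanged
			}
		}()
		outs = append(outs, out)
	}
	return outs
}

func fanIn(channels []<-chan int) <-chan int {
	merged := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(channels))
	for _, ch := range channels {
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				merged <- v
			}
		}(ch)
	}
	go func() {
		wg.Wait()     // all worker channels are exhausted
		close(merged) // so consumers can range over merged
	}()
	return merged
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()
	sum := 0
	for v := range fanIn(fanOut(in, 2)) {
		sum += v // arrival order is nondeterministic, the sum is not
	}
	fmt.Println(sum) // 15
}
```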
Merging Sorted Channels
Task Description
You need to implement a function:
func mergeSortedChannels(chs []<-chan int) <-chan int {
// Your code here
return nil
}
Requirements
- You are given multiple channels, each producing sorted integers in non-decreasing order.
- Your task is to merge these channels into one output channel that yields all integers from the input channels in a globally sorted order.
- Once you finish reading all input channels, you must close the output channel so that consumers can range over it until completion.
Considerations
- Because each input channel is sorted, you can think of this as merging multiple sorted lists (but in a streaming fashion).
- You need to ensure that no integer is lost or duplicated.
- The final output should be sorted in non-decreasing order.
- Handle edge cases gracefully, including:
- Empty channels (some or all input channels might not produce any values).
- Single channel (merging a single sorted channel is a trivial pass-through).
- Channels of differing lengths.
- Duplicate values across channels should appear correctly in sorted order.
Example Usage
func main() {
ch1 := make(chan int)
ch2 := make(chan int)
ch3 := make(chan int)
// Start producers
go func() {
// Sorted values
for _, v := range []int{1, 4, 7, 10} {
ch1 <- v
}
close(ch1)
}()
go func() {
// Sorted values
for _, v := range []int{2, 2, 6, 9} {
ch2 <- v
}
close(ch2)
}()
go func() {
// Sorted values
for _, v := range []int{3, 5} {
ch3 <- v
}
close(ch3)
}()
// Merge
merged := mergeSortedChannels([]<-chan int{ch1, ch2, ch3})
// Consume merged results
for val := range merged {
fmt.Println(val)
}
// Output should be: 1,2,2,3,4,5,6,7,9,10
}
Hints
- A typical approach might use a min-heap or priority queue (based on the smallest current head among the channels).
- Alternatively, you could “k-way merge” by reading from each channel in a loop, always taking the smallest available next integer.
- Ensure you handle closing the output channel once all input channels are exhausted.
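A sketch of the k-way merge approach from the hints: hold the current head of each channel and repeatedly emit the smallest one. A `container/heap` would drop the per-step scan from O(k) to O(log k), but the linear scan keeps the sketch short.

```go
package main

import "fmt"

func mergeSortedChannels(chs []<-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out) // signal consumers once every input is exhausted
		heads := make([]int, len(chs))
		alive := make([]bool, len(chs))
		// Prime the heads: read the first value from each channel.
		for i, ch := range chs {
			if v, ok := <-ch; ok {
				heads[i], alive[i] = v, true
			}
		}
		for {
			// Find the channel whose current head is smallest.
			min := -1
			for i := range chs {
				if alive[i] && (min == -1 || heads[i] < heads[min]) {
					min = i
				}
			}
			if min == -1 {
				return // every channel is exhausted
			}
			out <- heads[min]
			if v, ok := <-chs[min]; ok {
				heads[min] = v // refill that channel's head
			} else {
				alive[min] = false
			}
		}
	}()
	return out
}

func main() {
	ch := make(chan int)
	go func() {
		for _, v := range []int{1, 3, 5} {
			ch <- v
		}
		close(ch)
	}()
	for v := range mergeSortedChannels([]<-chan int{ch}) {
		fmt.Print(v, " ") // 1 3 5
	}
	fmt.Println()
}
```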
Map-Reduce Pattern
Task Description
You must implement a map-reduce pattern in Go:
func mapReduce(inputs []int, mapper func(int) int, reducer func(int, int) int) int {
// Your code here
return 0
}
Requirements
- Inputs: A slice of integers (`inputs`).
- Mapper: A function `func(int) int` that transforms each integer into an intermediate integer result.
  - Example: squaring the number, incrementing by 1, turning each integer into “1” to count occurrences, etc.
- Reducer: A function `func(int, int) int` that aggregates all intermediate results into a single integer.
  - Example: sum, product, minimum, maximum, etc.
- Behavior:
  - Apply `mapper` to each element of `inputs`.
  - Then reduce all mapper outputs into one final integer result using `reducer`.
  - Return this final integer.
- Edge Cases:
  - Empty inputs: The behavior might default to 0 or some other neutral value, depending on your design.
  - Single-element input: The result is simply `mapper(singleValue)`.
  - Large input: The implementation should handle any size of input slice without issues.
Example Usage
func main() {
inputs := []int{1,2,3,4}
mapper := func(x int) int { return x*x } // square
reducer := func(a int, b int) int { return a+b } // sum
result := mapReduce(inputs, mapper, reducer)
fmt.Println("Sum of squares:", result)
// Expected: 1+4+9+16 = 30
}
Hints
- Think of it as two steps:
  - Map: transform each item `i -> mapper(i)`.
  - Reduce: combine all mapped results using `reducer`.
- You can implement it with a simple loop:
  - Initialize an accumulator with a sensible “zero value.”
  - For each element in `inputs`, apply `mapper` first, then incorporate the mapped value into your accumulator using `reducer`.
- Decide how to handle empty `inputs`. The typical approach is to return `0` for operations like sum, or to define a special case for multiplication, etc.
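A straightforward sequential sketch of the two steps. Seeding the accumulator with the first mapped value (rather than a fixed zero) is a design choice: it makes the same code work for sum, product, min, and max; the empty-input case returns 0 here, one of the options the hints leave open.

```go
package main

import "fmt"

func mapReduce(inputs []int, mapper func(int) int, reducer func(int, int) int) int {
	if len(inputs) == 0 {
		return 0 // chosen default for empty input
	}
	acc := mapper(inputs[0]) // seed with the first mapped value
	for _, x := range inputs[1:] {
		acc = reducer(acc, mapper(x)) // map, then fold into the accumulator
	}
	return acc
}

func main() {
	square := func(x int) int { return x * x }
	sum := func(a, b int) int { return a + b }
	fmt.Println("Sum of squares:", mapReduce([]int{1, 2, 3, 4}, square, sum)) // 30
}
```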
Custom Error Wrapping
Task Description
You need to create a custom error type that wraps another error. Go’s `errors` package recognizes wrapped errors if your type implements the `Unwrap()` method to return the wrapped cause. The type should also implement the `Error()` method.
package main
type MyError struct {
// Your code here, e.g.:
// message string
// cause error
}
func (e *MyError) Error() string {
// Should return a meaningful error message, possibly combining e.message + e.cause
return ""
}
func (e *MyError) Unwrap() error {
// Should return the wrapped cause if present, or nil otherwise
return nil
}
Requirements
- Fields
  - You may store an underlying cause (type `error`) and a custom message (type `string`) within your struct.
- `Error()`
  - Returns a string describing the error.
  - Typically includes the custom message and possibly details about the cause.
- `Unwrap()`
  - Returns the wrapped cause (`error`), if any. If there is no cause, return `nil`.
  - This method enables `errors.Is` and `errors.Unwrap` to work correctly on your custom error.
- Behavior
  - The built-in `errors.Unwrap()` must retrieve the underlying error from your custom type.
  - `errors.Is(yourError, someOtherError)` should work as expected if `someOtherError` is the same as (or part of the chain from) your underlying cause.
- Edge Cases
  - Nil cause: If your custom error has no underlying cause, `Unwrap()` should return `nil`.
  - Empty message: It’s valid to have an empty message, though typically an error message is expected.
- Nil cause: If your custom error has no underlying cause,
Example Usage
package main
import (
"errors"
"fmt"
)
func main() {
orig := errors.New("original error")
myErr := &MyError{
message: "Something went wrong",
cause: orig,
}
fmt.Println("MyError says:", myErr) // calls .Error()
// Unwrap the error to see if we get the original cause
fmt.Println("Unwrapped cause:", errors.Unwrap(myErr))
}
Hints
- Make sure your `MyError` struct fields are exported, or at least accessible within the package, if needed by the tests (though the tests might rely purely on `Error()` and `Unwrap()`).
- Often, the string returned by `Error()` looks like `fmt.Sprintf("%s: %v", e.message, e.cause)`, but you can customize it.
- If you set `cause` to `nil`, `Unwrap()` must return `nil`. This is important for `errors.Is` and `errors.Unwrap` usage.
- Pay attention to clarity. A common pattern is:
func (e *MyError) Error() string {
if e.cause != nil {
return fmt.Sprintf("%s: %v", e.message, e.cause)
}
return e.message
}
Line-by-Line File Processing
Task Description
You need to implement a function that reads a file line by line using a buffered approach and processes each line as it is read, without loading the entire file into memory.
func processFile(filename string) error {
// Your code here:
// 1. Open the file
// 2. Create a buffered reader (e.g., bufio.NewReader or bufio.NewScanner)
// 3. Iterate through each line
// 4. Process the line (e.g., print/log/store it)
// 5. Close the file
return nil
}
Requirements
- Open the File
  - Return an error if the file cannot be opened.
- Read Line by Line
  - Use a buffered reader (e.g., `bufio.Scanner` or `bufio.Reader`) to read each line.
  - Do not store the entire file’s contents in memory at once.
- Process Each Line
  - After reading a line, do some form of “processing” (printing, logging, or saving to a data structure).
  - The provided tests assume you can handle any lines. Real implementation details can vary.
- Close the File
  - Ensure the file is closed when you’re done.
- Error Handling
  - If reading fails, return the error.
  - If the file doesn’t exist, return an error (e.g., one wrapping `os.ErrNotExist`).
Example Usage
package main
import "fmt"
func main() {
err := processFile("example.txt")
if err != nil {
fmt.Println("Failed to process file:", err)
return
}
fmt.Println("File processed successfully")
}
Hints
- Use `os.Open(filename)` to get an `os.File`.
- Pass that to a `bufio.Scanner` or `bufio.NewReader`.
- Loop over each line with `scanner.Scan()` or `reader.ReadString('\n')`.
- Handle potential partial lines (EOF without a trailing newline).
- Be sure to `defer file.Close()` after successfully opening.
Generic Slice Filtering
Task Description
Using Go generics, implement a function `Filter` that:
package main
func Filter[T any](slice []T, pred func(T) bool) []T {
// Your code here
return nil
}
Requirements
- Input
  - A slice of any type `T`.
  - A predicate function `func(T) bool`.
- Output
  - A new slice containing only the elements for which `pred` returns `true`.
  - Retain the order of elements that satisfy the predicate.
- Constraints & Behavior
  - Do not modify the original slice in place.
  - Return an empty slice if no elements match or if the original slice is empty.
  - Maintain type safety using Go generics.
- Edge Cases
  - Empty slice => returns an empty slice.
  - No elements match => returns an empty slice.
  - All elements match => returns a new slice identical to the original.
Example Usage
package main
import (
"fmt"
)
func main() {
nums := []int{1, 2, 3, 4, 5}
isOdd := func(x int) bool {
return x%2 != 0
}
oddNums := Filter(nums, isOdd)
fmt.Println(oddNums) // [1, 3, 5]
}
Hints
- Use a simple loop to accumulate elements for which `pred(element)` is `true`.
- Since `T` is a generic type, your function should work equally well for `[]int`, `[]string`, or other slice types.
- Return a new slice; do not mutate or reorder the original slice.
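A minimal generic implementation (Go 1.18+). Allocating the result with `make([]T, 0)` is a small design choice so the "no matches" case returns an empty, non-nil slice:

```go
package main

import "fmt"

func Filter[T any](slice []T, pred func(T) bool) []T {
	out := make([]T, 0) // empty, non-nil result even when nothing matches
	for _, v := range slice {
		if pred(v) {
			out = append(out, v) // keeps the original relative order
		}
	}
	return out
}

func main() {
	isOdd := func(x int) bool { return x%2 != 0 }
	fmt.Println(Filter([]int{1, 2, 3, 4, 5}, isOdd)) // [1 3 5]

	nonEmpty := func(s string) bool { return s != "" }
	fmt.Println(Filter([]string{"go", "", "rust"}, nonEmpty)) // [go rust]
}
```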
Cancel through Context
Task Description
You need to write a function:
package main
import "context"
func longOperation(ctx context.Context) {
// Your code here
}
Requirements
- Long Operation
  - The function simulates a time-consuming (long-running) task.
  - It should run asynchronously if needed, but at a minimum it must respond to context cancellation.
- Use `context.Context`
  - Honor the `ctx.Done()` channel to detect cancellation or timeout.
  - Terminate the operation as soon as `ctx.Done()` is signaled, rather than continuing to completion.
- Behavior
  - If the context is not canceled, the function should simulate completing its work.
  - If the context is canceled or times out, the function should stop early and return promptly.
- Edge Cases
  - Immediate cancellation: If the context is canceled before starting, your function should return quickly.
  - Long running: The function might have loops or periodic checks of `ctx.Err()`.
Example Usage
package main
import (
"context"
"fmt"
"time"
)
func main() {
// Example: Start a long operation with a timeout
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
longOperation(ctx)
fmt.Println("Operation done (either completed or canceled)")
}
Hints
- A typical pattern is something like:
for {
// do partial work
select {
case <-ctx.Done():
// context canceled or timed out
return
default:
// continue
}
}
- If your operation does smaller sub-steps or sleeps, just ensure each iteration checks `ctx.Done()`.
- Use `context.WithCancel`, `context.WithTimeout`, or `context.WithDeadline` in tests to confirm cancellation/timeout.
- Make sure to exit cleanly and not block if the context is canceled.
0/1 Knapsack with Dynamic Programming
Task Description
Implement a 0/1 knapsack function:
package main
func knapsack(weights, values []int, capacity int) int {
// Your code here
return 0
}
Requirements
- Inputs
  - `weights`: Slice of item weights (each corresponds to an item).
  - `values`: Slice of item values, same length as `weights`.
  - `capacity`: Maximum weight capacity of the knapsack.
- Output
  - An integer representing the maximum total value that can be achieved without exceeding `capacity`.
- Constraints
  - This is the classic 0/1 knapsack problem, meaning each item may be taken or not taken (no partial items).
- Approach
  - Use dynamic programming to compute the best solution:
    - Typically a 2D table approach: `dp[i][w]` stores the best possible value using items up to `i` with capacity `w`.
    - A 1D rolling-array optimization is also acceptable.
- Edge Cases
  - Empty `weights`/`values`: The result is 0.
  - Zero capacity: No items can be taken, so the result is 0.
  - All items fit: The result is the sum of all `values`.
  - Large capacity with big or small items.
  - Weights and values must have the same length (you may assume the caller ensures this).
Example Usage
package main
import "fmt"
func main() {
weights := []int{1, 3, 4}
values := []int{15, 50, 60}
capacity := 4
maxVal := knapsack(weights, values, capacity)
fmt.Println("Max knapsack value:", maxVal) // Should print 65 for items (1 & 2)
}
Hints
- A common DP solution:
  - Initialize a table `dp[len(weights)+1][capacity+1]` to 0.
  - For each item `i`, for each possible weight `w` from 0..capacity:
    - Either you don’t take item `i`, so `dp[i][w] = dp[i-1][w]`.
    - Or you take item `i` (if it fits), so `dp[i][w] = dp[i-1][w - weights[i]] + values[i]`.
    - Then `dp[i][w]` is the max of those two.
  - Return `dp[len(weights)][capacity]`.
- You can also optimize space usage to 1D, iterating weights from high to low to avoid overwriting needed subproblems.
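Here is the 1D rolling-array variant from the last hint: `dp[w]` holds the best value achievable with capacity `w` using the items seen so far, and iterating `w` downward keeps each item 0/1 (used at most once).

```go
package main

import "fmt"

func knapsack(weights, values []int, capacity int) int {
	dp := make([]int, capacity+1) // dp[w]: best value within capacity w
	for i := range weights {
		// Iterate capacity downward so item i is counted at most once.
		for w := capacity; w >= weights[i]; w-- {
			if cand := dp[w-weights[i]] + values[i]; cand > dp[w] {
				dp[w] = cand // taking item i beats skipping it
			}
		}
	}
	return dp[capacity]
}

func main() {
	weights := []int{1, 3, 4}
	values := []int{15, 50, 60}
	fmt.Println("Max knapsack value:", knapsack(weights, values, 4)) // 65
}
```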
Generating All Permutations
Task Description
Implement a function that generates all permutations of a given slice of integers:
package main
func permutations(nums []int) [][]int {
// Your code here
return nil
}
Requirements
- Input
  - A slice of integers (e.g., `[1,2,3]`).
- Output
  - All unique permutations of the input slice, each permutation represented as a slice of integers.
  - The function returns a slice of these permutations (a `[][]int`).
- Behavior
  - Permutations are the distinct arrangements of the given elements.
  - For `n` distinct elements, there are `n!` permutations.
  - If duplicates exist in `nums`, permutations containing the same elements in the same order should not be repeated.
- Edge Cases
  - Empty slice => The only “permutation” is the empty slice `[]`. Return `[[]]`.
  - Single element => Return `[ [elem] ]`.
  - Duplicates => Must avoid generating the same permutation multiple times.
- Example
  - Input: `[1, 2, 3]`
  - Output: `[ [1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1] ]` (order can vary).
Example Usage
package main
import "fmt"
func main() {
data := []int{1, 2, 3}
perms := permutations(data)
fmt.Println("Permutations of", data, "are:", perms)
}
Hints
- You can generate permutations with a recursive approach:
  - Swap elements in the slice, recurse, swap back.
  - Keep track of used elements, or use a backtracking approach.
  - Use a helper that builds the result by picking each element in turn and recursing on the remainder.
  - Handle duplicates by sorting the input and skipping repeated elements in the recursion, or by using a `map`/set approach.
- Make sure to avoid generating the same permutation multiple times if the input contains duplicates.
- The test includes a case with duplicates (`[1, 1, 2]`). Your solution must return each distinct arrangement exactly once.
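A backtracking sketch using the sort-and-skip technique from the hints: after sorting, a value is skipped when its equal predecessor has not been used at the current depth, which prunes duplicate branches.

```go
package main

import (
	"fmt"
	"sort"
)

func permutations(nums []int) [][]int {
	sorted := append([]int(nil), nums...) // copy: don't mutate the caller's slice
	sort.Ints(sorted)
	var res [][]int
	used := make([]bool, len(sorted))
	cur := make([]int, 0, len(sorted))
	var backtrack func()
	backtrack = func() {
		if len(cur) == len(sorted) {
			res = append(res, append([]int(nil), cur...)) // snapshot the arrangement
			return
		}
		for i, v := range sorted {
			// Skip used elements, and skip a duplicate whose equal
			// predecessor is unused (that branch was already explored).
			if used[i] || (i > 0 && sorted[i] == sorted[i-1] && !used[i-1]) {
				continue
			}
			used[i] = true
			cur = append(cur, v)
			backtrack()
			cur = cur[:len(cur)-1] // undo the choice
			used[i] = false
		}
	}
	backtrack()
	return res
}

func main() {
	fmt.Println(permutations([]int{1, 1, 2})) // [[1 1 2] [1 2 1] [2 1 1]]
}
```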
Concurrency-Safe FIFO Queue
Task Description
You need to implement a FIFO queue (`SafeQueue`) that supports concurrent access from multiple goroutines. The queue should offer two methods:
type SafeQueue struct {
// Your code here
}
func (q *SafeQueue) Enqueue(val interface{}) {
// Your code here
}
func (q *SafeQueue) Dequeue() (interface{}, bool) {
// Your code here
return nil, false
}
Requirements
- FIFO Behavior
  - The first item enqueued should be the first item dequeued.
- Concurrency Safety
  - Multiple goroutines may call `Enqueue` or `Dequeue` at the same time.
  - Must avoid race conditions. Usually this is done with a `sync.Mutex`, a channel-based approach, or `sync.Cond`.
  - `Dequeue` must not panic if the queue is empty; it should return `(nil, false)` or an equivalent indicator.
- Return Values
  - `Enqueue(val interface{})`: no return value; just add `val` to the end.
  - `Dequeue() (interface{}, bool)`: returns `(theItem, true)` if successful, or `(nil, false)` if the queue is empty.
- Edge Cases
  - Empty queue on dequeue => return `(nil, false)`.
  - Concurrency with a large number of `Enqueue` and `Dequeue` calls.
  - Ensure no data is lost or duplicated.
Example Usage
package main
import "fmt"
func main() {
var q SafeQueue
q.Enqueue("hello")
q.Enqueue("world")
val, ok := q.Dequeue()
fmt.Println(val, ok) // "hello", true
val, ok = q.Dequeue()
fmt.Println(val, ok) // "world", true
val, ok = q.Dequeue()
fmt.Println(val, ok) // nil, false (empty queue)
}
Hints
- A simple approach is to store items in a slice:
  - Use a `sync.Mutex` to guard access to the slice.
  - `Enqueue` appends to the slice.
  - `Dequeue` removes from the front if not empty.
- Watch out for slice re-slicing or shifting overhead if you remove from the front. An alternative is to keep front/back indexes.
- Ensure `Dequeue` returns immediately when empty rather than blocking (unless a blocking approach is desired, but this specification just returns `(nil, false)`).
Trie Implementation
Task Description
You need to implement a Trie data structure with the following methods:
package main
type Trie struct {
// Your code here
}
func (t *Trie) Insert(word string) {
// Your code here
}
func (t *Trie) Search(word string) bool {
// Your code here
return false
}
Requirements
- `Insert(word string)`
  - Inserts the string `word` into the trie.
- `Search(word string) bool`
  - Returns `true` if `word` exists in the trie, `false` otherwise.
- Data Structure
  - The trie typically has a root node that may contain up to 26 children (for a basic a-z scenario) or a map of runes if handling Unicode or arbitrary characters.
  - Each child node represents the next character in the path.
  - One common approach is to store a boolean `endOfWord` (or `isTerminal`) flag in nodes to mark complete words.
- Edge Cases
  - Empty string: Decide whether you allow an empty string to be inserted and searched. If so, handle it carefully in your data structure.
  - Duplicate inserts: Inserting the same word multiple times shouldn’t break anything. Searching should still return `true`.
  - Partial matches: Searching for a prefix (e.g., “app” when only “apple” is inserted) should return `false` unless you consider prefixes valid words in your design.
- Behavior
  - The tests assume standard trie behavior: partial prefixes do not count as found words unless explicitly inserted.
  - `Search` must traverse the trie nodes based on each character; if any link is missing, or if the final node isn’t marked as the end of a word, return `false`.
Example Usage
package main
import "fmt"
func main() {
trie := &Trie{}
trie.Insert("hello")
fmt.Println(trie.Search("hello")) // true
fmt.Println(trie.Search("hel")) // false
}
Hints
- Node Structure:
  - A node might have a map or fixed array of children.
  - A boolean `endOfWord` marks whether a node completes a word.
- Insert:
  - Traverse from the root node, create child nodes if missing, then mark `endOfWord` on the final node.
- Search:
  - Traverse from the root node, following each character’s child link.
  - If at any point the link doesn’t exist, return `false`.
  - At the end, check whether the final node’s `endOfWord` is `true`.
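A sketch with map-based nodes (so arbitrary runes work, not just a-z). The internal `trieNode` type is my own naming; the task only fixes the `Trie` type and its two methods:

```go
package main

import "fmt"

type trieNode struct {
	children  map[rune]*trieNode
	endOfWord bool // true only if a complete word ends at this node
}

type Trie struct {
	root trieNode
}

func (t *Trie) Insert(word string) {
	node := &t.root
	for _, r := range word {
		if node.children == nil {
			node.children = make(map[rune]*trieNode) // lazy allocation
		}
		child, ok := node.children[r]
		if !ok {
			child = &trieNode{}
			node.children[r] = child
		}
		node = child
	}
	node.endOfWord = true // re-inserting a word is harmlessly idempotent
}

func (t *Trie) Search(word string) bool {
	node := &t.root
	for _, r := range word {
		child, ok := node.children[r]
		if !ok {
			return false // missing link: word was never inserted
		}
		node = child
	}
	return node.endOfWord // a bare prefix does not count
}

func main() {
	trie := &Trie{}
	trie.Insert("hello")
	fmt.Println(trie.Search("hello")) // true
	fmt.Println(trie.Search("hel"))   // false
}
```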
Fenwick (Binary Indexed) Tree
Task Description
You need to implement a Fenwick Tree (also called a Binary Indexed Tree, or BIT) for prefix sums:
package main
type Fenwick struct {
// Your code here
}
func NewFenwick(size int) *Fenwick {
// Your code here
return nil
}
func (f *Fenwick) Update(i, delta int) {
// Your code here
}
func (f *Fenwick) PrefixSum(i int) int {
// Your code here
return 0
}
Requirements
- Fenwick Tree Structure
  - Backed by an array (often 1-indexed) where each element covers a range of indices determined by its last set bit.
  - Typically `tree[i]` stores the partial sum of elements from `i - (i & -i) + 1` through `i` inclusive.
- `Update(i, delta int)`
  - Increases the value at index `i` by `delta`.
  - Then propagates that update to all relevant Fenwick tree nodes.
- `PrefixSum(i int) int`
  - Returns the sum of elements from index `1` up to index `i` (inclusive).
  - Traverses the Fenwick tree array in a way that accumulates partial sums.
- 1-Based Indexing
  - Fenwick trees are usually implemented with 1-based indexing. For a tree of size `n`, valid indices are `1`..`n`.
  - If your input is 0-based, you may need to adjust by +1.
- Complexities
  - Both `Update` and `PrefixSum` should work in O(log n) time.
- Edge Cases
  - Size = 1: minimal tree.
  - Index out of bounds: the user of the tree is usually expected to stay within `[1..size]`.
  - Negative or zero deltas are allowed and should function normally.
Example Usage
package main
import "fmt"
func main() {
fenw := NewFenwick(5)
fenw.Update(1, 2)
fenw.Update(3, 5)
// PrefixSum(3) => 2 (index 1) + 5 (index 3) = 7
fmt.Println("PrefixSum(3) =", fenw.PrefixSum(3))
}
Hints
- For a Fenwick tree, the typical code patterns are:
`Update(i, delta)`:
for i <= size {
    tree[i] += delta
    i += i & (-i)
}
`PrefixSum(i)`:
result := 0
for i > 0 {
    result += tree[i]
    i -= i & (-i)
}
return result
- Make sure you handle 1-based indexing consistently.
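The patterns above translate almost directly into a complete type. The backing array has `size+1` slots so indices 1..size can be used as-is:

```go
package main

import "fmt"

type Fenwick struct {
	tree []int // 1-indexed; tree[0] is unused
	size int
}

func NewFenwick(size int) *Fenwick {
	return &Fenwick{tree: make([]int, size+1), size: size}
}

// Update adds delta to index i and propagates along the "parents"
// reached by repeatedly adding the last set bit.
func (f *Fenwick) Update(i, delta int) {
	for ; i <= f.size; i += i & (-i) {
		f.tree[i] += delta
	}
}

// PrefixSum returns the sum of elements 1..i by stripping the last
// set bit until i reaches 0.
func (f *Fenwick) PrefixSum(i int) int {
	sum := 0
	for ; i > 0; i -= i & (-i) {
		sum += f.tree[i]
	}
	return sum
}

func main() {
	fenw := NewFenwick(5)
	fenw.Update(1, 2)
	fenw.Update(3, 5)
	fmt.Println("PrefixSum(3) =", fenw.PrefixSum(3)) // 7
	fmt.Println("PrefixSum(2) =", fenw.PrefixSum(2)) // 2
}
```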
Graceful Shutdown Server
Task Description
You must implement a server that:
- Listens for incoming connections or requests (e.g., an HTTP server).
- Gracefully shuts down upon receiving a `SIGINT` or `SIGTERM` signal.
- Prints a message (e.g., “Shutting down gracefully…”) before exiting.
package main
func runServer() {
// Your code here
// 1. Start a server (e.g., an http.Server on some port).
// 2. Listen for OS signals SIGINT/SIGTERM in a separate goroutine or use signal.Notify.
// 3. When signal is received, shut down the server gracefully (e.g. server.Shutdown(ctx)).
// 4. Print "Shutting down gracefully" or any final message.
}
Round-Robin Load Balancer
Task Description
Implement a round-robin load balancer over a list of server endpoints. Each time `Next()` is called, return the next endpoint in the list, cycling back to the first endpoint after reaching the end.
package main
type RoundRobin struct {
// Your code here (e.g. slice of endpoints, current index, etc.)
}
func NewRoundRobin(endpoints []string) *RoundRobin {
// Your code here
return nil
}
func (r *RoundRobin) Next() string {
// Your code here
return ""
}
Requirements
- Endpoints
  - A slice of server endpoints, e.g., `[]string{"server1", "server2", "server3"}`.
  - If the slice is empty, decide how to handle `Next()` (return an empty string, or handle it differently).
- Round-Robin Behavior
  - Each call to `Next()` should return the next endpoint in a cycle.
  - After the last endpoint, wrap around to the first.
- Thread Safety
  - (Optional) If concurrency is needed, you might protect the current index with a mutex.
  - The tests here do not specifically exercise concurrency, so a basic approach is fine.
- Edge Cases
  - Single endpoint: repeatedly return the same endpoint.
  - Empty endpoints: `Next()` should return `""`.
Example Usage
package main
import "fmt"
func main() {
lb := NewRoundRobin([]string{"s1", "s2", "s3"})
for i := 0; i < 5; i++ {
fmt.Println(lb.Next())
// s1, s2, s3, s1, s2 ...
}
}
Hints
- Keep a current index. Each call to `Next()` returns the endpoint at the current index, then increments the index modulo the length of `endpoints`.
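The index-plus-modulo hint is all that is needed for the basic (non-concurrent) version:

```go
package main

import "fmt"

type RoundRobin struct {
	endpoints []string
	current   int
}

func NewRoundRobin(endpoints []string) *RoundRobin {
	return &RoundRobin{endpoints: endpoints}
}

// Next returns "" for an empty endpoint list, per the edge case above.
func (r *RoundRobin) Next() string {
	if len(r.endpoints) == 0 {
		return ""
	}
	ep := r.endpoints[r.current]
	r.current = (r.current + 1) % len(r.endpoints) // wrap around
	return ep
}

func main() {
	lb := NewRoundRobin([]string{"s1", "s2", "s3"})
	for i := 0; i < 5; i++ {
		fmt.Println(lb.Next()) // s1, s2, s3, s1, s2
	}
}
```

For concurrent callers, wrapping the body of `Next()` in a `sync.Mutex` (or using `atomic.AddInt64` on the counter) would make the index update safe.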
Popular Golang Development questions
Will AI replace Golang developers?
AI will not replace Golang developers. Developers are essential for creating complex architectures, optimizing code, and understanding system requirements. AI may assist in coding tasks, but human expertise ensures application reliability and scalability.
Is Golang front-end or back-end?
Golang is primarily a back-end language. It is widely used for building APIs, server-side logic, and distributed systems. While it can handle some front-end tasks through WebAssembly, its main focus is on back-end development.
What is Golang used for?
Golang, or Go, is used for building scalable and efficient applications, including web servers, microservices, network tools, and cloud-native applications. Its performance, simplicity, and concurrency model make it ideal for high-performance systems and modern distributed architectures.
What is the difference between Go and C++?
Go is simpler and easier to learn than C++, with a focus on readability and concurrency. C++ offers more control over system resources and is better for low-level programming, while Go excels in modern web applications, microservices, and cloud-native environments due to its built-in concurrency and garbage collection.
What are the advantages of using Golang for web development?
Golang’s advantages for web development include its high performance, simplicity, and scalability. Its built-in HTTP library makes it easy to create web servers, and its concurrency model ensures efficient handling of multiple requests. Go’s compiled nature results in fast execution and deployment, making it ideal for modern web applications.