Golang interview questions and answers for 2025

Golang Interview Questions for Freshers and Intermediate Levels

1.

What are the advantages of using Golang? Why did you choose Golang?

Answer

Advantages of Using Golang:

  1. Performance: Golang offers excellent performance, thanks to its compiled nature and efficient runtime. It’s nearly as fast as C/C++ while being much easier to write and maintain.
  2. Concurrency: The language’s built-in support for concurrency, through goroutines and channels, makes it ideal for building highly scalable and concurrent systems.
  3. Simplicity and Readability: Go’s syntax is straightforward and clean, making code easy to read, review, and maintain. This simplicity is particularly beneficial for large teams or open-source projects.
  4. Standard Library: The comprehensive and well-designed standard library provides tools for handling tasks like HTTP servers, JSON processing, and cryptography without relying on third-party packages.
  5. Cross-Platform Compilation: Go allows developers to easily compile binaries for multiple platforms, which is invaluable for building applications that need to run on different systems.
  6. Community and Ecosystem: Go has a thriving community and ecosystem with high-quality libraries, tools, and documentation.

I chose Go because its strong performance and concurrency features make it my go-to choice for backend systems, microservices, and cloud-native applications.

 

Additionally, Go’s opinionated approach to code style and structure (enforced by tools like gofmt) promotes consistency across projects, reducing the cognitive load when working with large teams.

The language’s growing adoption by major companies and its applicability across diverse domains, from web development to DevOps tools, convinced me that Go was the right choice for both personal growth and delivering high-value solutions.

2.

What are the basic data types in Go?

Answer

Go provides a rich set of basic data types, including:

  • Boolean: bool
  • Integer types: int, int8, int16, int32 (alias rune), int64, and their unsigned counterparts uint, uint8 (alias byte), uint16, uint32, uint64
  • Floating-point types: float32, float64
  • Complex number types: complex64, complex128
  • String type: string

Go also supports derived types such as arrays, slices, structs, pointers, maps, functions, and channels.

3.

How do you declare and initialize a variable in Go?

Answer

You can declare variables using the var keyword or the short declaration operator :=.

// Using var keyword
var x int
x = 10

// Declaration with initialization
var y int = 20

// Type inference
var z = 30

// Short variable declaration (only inside functions)
a := 40
4.

What are Go packages and how do you import them?

Answer

Packages are the basic units of code organization in Go. Every Go file starts with a package declaration. You can import standard library packages or external packages using the import keyword.

package main

import (
    "fmt"
    "math/rand"
)

func main() {
    fmt.Println("Random number:", rand.Intn(100))
}
5.

Explain the difference between := and var declarations.

Answer
  • := is a short variable declaration that can only be used inside functions. It infers the variable’s type automatically and both declares and initializes the variable.
  • var can be used at the package or function scope. It allows you to specify the type explicitly or rely on type inference. var is also used for declaring package-level variables and can declare multiple variables at once.
6.

What is a slice in Go?

Answer

A slice in Go is a dynamically sized, flexible view into the elements of an underlying array. Unlike arrays, slices do not have a fixed length. They are defined by a pointer to an underlying array, a length, and a capacity. For example:

nums := []int{1, 2, 3, 4}
fmt.Println(nums) // [1 2 3 4]
fmt.Println(len(nums)) // 4
fmt.Println(cap(nums)) // 4
7.

How do you create a slice with make()?

Answer

The make function is used to create slices (as well as maps and channels) with a specified length and capacity.

s := make([]int, 5) // length 5, capacity 5
s2 := make([]int, 5, 10) // length 5, capacity 10
8.

How do you copy slices in Go?

Answer

Use the built-in copy() function. copy(destination, source) copies elements from the source into the destination and returns the number of elements copied, which is the minimum of len(destination) and len(source).

src := []int{1, 2, 3}
dest := make([]int, 3)
copy(dest, src)
fmt.Println(dest) // [1 2 3]
9.

What are maps in Go and how do you declare them?

Answer

A map in Go is a built-in data structure that associates keys of one type with values of another type. They are implemented as hash tables. Maps are declared and initialized using the make() function or map literals.

m := make(map[string]int)
m["age"] = 30

m2 := map[string]string{
    "name": "Alice",
    "city": "London",
}
10.

How do you check if a key exists in a map?

Answer

Use the “comma ok” idiom:

value, ok := m["age"]
if ok {
    fmt.Println("Key exists with value:", value)
} else {
    fmt.Println("Key does not exist")
}
11.

What are structs in Go?

Answer

A struct in Go is a composite data type that groups together variables under one name. These variables are called fields.

type Person struct {
    Name string
    Age  int
}

p := Person{Name: "Bob", Age: 25}
12.

How do you define methods on structs?

Answer

You can define a method by declaring a function with a receiver. The receiver is a variable that appears before the function name.

type Rectangle struct {
    Width, Height float64
}

func (r Rectangle) Area() float64 {
    return r.Width * r.Height
}

rect := Rectangle{5, 10}
fmt.Println(rect.Area()) // 50
13.

Explain pointers in Go.

Answer

A pointer holds the memory address of a variable. In Go, you use & to get the address of a variable and * to dereference a pointer.

x := 10
p := &x
fmt.Println(*p) // prints 10
*p = 20
fmt.Println(x) // prints 20
14.

What is the zero value in Go?

Answer

All variables in Go have a zero value if they are not initialized. For example:

  • int zero value is 0
  • bool zero value is false
  • string zero value is "" (empty string)
  • map, slice, chan, pointer zero value is nil
15.

What is the difference between new() and make() functions?

Answer
  • new(T) allocates zeroed storage for a value of type T and returns a pointer to it (*T). It works for any type but performs no initialization beyond zeroing.
  • make() is used only for slices, maps, and channels. It returns an initialized (not zeroed) value of the chosen type, ready to use; for example, a map whose internal hash table is already allocated.
16.

How does concurrency work in Go?

Answer

Go uses goroutines and channels for concurrency. A goroutine is a lightweight thread managed by the Go runtime. Channels are typed pipes that allow goroutines to communicate and synchronize.

go func() {
    fmt.Println("Hello from a goroutine")
}()
17.

What are channels in Go? What is a race condition?

Answer

Channels provide a way for goroutines to communicate by sending and receiving values. When used correctly, they synchronize goroutines and help prevent race conditions. You can declare channels using the chan keyword:

ch := make(chan int)
go func() {
    ch <- 10
}()
value := <-ch
fmt.Println(value) // 10

A race condition occurs when multiple goroutines access shared data simultaneously, and at least one modifies it without proper synchronization.

 

This leads to unpredictable behavior because the program’s outcome depends on the timing of these operations, potentially causing data corruption or bugs.

18.

What is the difference between buffered and unbuffered channels?

Answer
  • Unbuffered channels: Send and receive operations block until both the sender and the receiver are ready.
  • Buffered channels: Have a capacity. Sends do not block until the buffer is full, and receives do not block until the buffer is empty.
ch := make(chan int, 3) // buffered channel with capacity 3
19.

Explain the select statement in Go.

Answer

select lets a goroutine wait on multiple communication operations. It blocks until one of its cases can run. This is useful to handle multiple channel operations simultaneously.

select {
case val := <-ch1:
    fmt.Println("Received from ch1:", val)
case ch2 <- 42:
    fmt.Println("Sent to ch2")
default:
    fmt.Println("No communication")
}
20.

How do you handle errors in Go?

Answer

Go encourages explicit error handling. Functions often return an error type as the last return value. You check if the error is nil. If not nil, handle it appropriately.

f, err := os.Open("file.txt")
if err != nil {
    // handle error
    return
}
defer f.Close()
// use f
21.

How does defer work in Go?

Answer

defer schedules a function call to be run after the function containing it returns. Deferred calls are executed in Last-In-First-Out order.

func main() {
    defer fmt.Println("world")
    fmt.Println("hello")
}

This will print:

hello
world
22.

What is a Go interface?

Answer

An interface in Go is a type that specifies a method set. If a type implements those methods, it is said to satisfy the interface. Interfaces enable polymorphism in Go.

type Shape interface {
    Area() float64
}

Any type that has a method Area() float64 is a Shape.

23.

How do you implement an interface in Go?

Answer

There is no implements keyword. In Go, implementing an interface is based on duck typing, which means “if it looks like a duck and quacks like a duck, it’s a duck.” A type implements an interface simply by defining all the methods declared in that interface — no explicit declaration is needed.

type Circle struct {
    Radius float64
}

func (c Circle) Area() float64 {
    return 3.14 * c.Radius * c.Radius
}

var s Shape = Circle{5}
fmt.Println(s.Area())
24.

Can you have multiple receivers implementing the same interface?

Answer

Yes. Any number of types can implement the same interface as long as they define the methods required by that interface.

25.

What is the blank identifier _ in Go?

Answer

The blank identifier _ is used to ignore a value. For example, if a function returns two values and you only need one, you can discard the other using _:

val, _ := someFunction()
26.

What is the purpose of the init() function?

Answer

init() functions run before main() executes and are typically used for initialization logic such as setting up package-level state. A package can contain multiple init() functions (even within a single file); they run in the order they are declared, after all package-level variables have been initialized.
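
A minimal illustration, assuming a package-level map that needs to be set up before main runs:

package main

import "fmt"

var config map[string]string

func init() {
    // Runs before main; initialize package-level state here.
    config = map[string]string{"env": "dev"}
}

func main() {
    fmt.Println(config["env"]) // dev
}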

27.

How do you run tests in Go?

Answer

Go uses the go test command for testing. Test files end with _test.go and contain functions named TestXxx(t *testing.T). Running go test in a package directory automatically discovers and runs these tests.
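
For example, a calc_test.go file in a hypothetical calc package might contain (Add is assumed to exist in the package under test):

package calc

import "testing"

func TestAdd(t *testing.T) {
    got := Add(2, 3)
    if got != 5 {
        t.Errorf("Add(2, 3) = %d; want 5", got)
    }
}

Running go test ./... from the module root runs the tests of every package.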

28.

Explain the Go module system.

Answer

Modules are the way Go manages dependencies. A go.mod file defines the module path and tracks dependency versions. go get manages retrieving and updating dependencies. Modules replaced the older GOPATH-based dependency management.
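
A typical go.mod looks like this (module path and versions are placeholders):

module github.com/example/myservice

go 1.22

require github.com/some/dependency v1.2.3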

29.

What are goroutines and how are they different from OS threads?

Answer

Goroutines are lightweight, managed by the Go runtime, and multiplexed onto OS threads. They start with a small stack and grow as needed. Compared to OS threads, goroutines are more efficient in terms of memory and context-switch overhead, allowing millions of goroutines to run concurrently.

30.

What is the panic and recover mechanism in Go?

Answer
  • panic: Stops normal execution of the current function and begins unwinding the stack.
  • recover: Can catch a panic within a defer call, enabling you to handle the error and prevent the program from crashing.
func f() {
defer func() {
if r := recover(); r != nil {
fmt.Println("Recovered from panic:", r)
}
}()
panic("something went wrong")
}

Golang Interview Questions for Experienced Levels

1.

Explain how garbage collection works in Go.

Answer

Go’s garbage collector (GC) uses a concurrent, tri-color mark-and-sweep algorithm that runs in parallel with the application. It marks reachable objects, then sweeps away unreachable ones.

 

Modern versions of Go have low GC pauses and aim to achieve sub-millisecond pause times. The GC is tuned via environment variables (like GOGC) to control how often it runs based on heap growth.
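
For illustration, the GC target can be adjusted from the environment or in code (a sketch; 100 is the default and the right value is workload-specific):

// Run with a more aggressive GC (collect when the heap grows 50% over live data):
//   GOGC=50 ./myapp

package main

import "runtime/debug"

func main() {
    // Equivalent programmatic tuning of the GOGC percentage.
    debug.SetGCPercent(50)
}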

2.

How would you optimize memory usage in a high-load Go application?

Answer

Techniques include:

  • Use efficient data structures: Replace large arrays or maps with more memory-efficient alternatives, or preallocate slices when possible.
  • Minimize garbage: Reuse buffers, use object pools (e.g., sync.Pool), and avoid unnecessary allocations.
  • Reduce pointer chasing: Store data contiguously and consider using value types instead of pointers where possible.
  • Profiling and analysis: Use go tool pprof and memory profiles to identify hotspots and leaks.
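
Building on the sync.Pool bullet above, a minimal buffer-reuse sketch (the render function and pool setup are illustrative):

package main

import (
    "bytes"
    "fmt"
    "sync"
)

var bufPool = sync.Pool{
    New: func() any { return new(bytes.Buffer) },
}

func render(name string) string {
    buf := bufPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset()      // make the buffer reusable
        bufPool.Put(buf) // return it to the pool
    }()
    fmt.Fprintf(buf, "hello, %s", name)
    return buf.String()
}

func main() {
    fmt.Println(render("gopher"))
}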
3.

What is a memory leak and how to detect it?

Answer

A memory leak occurs when a program allocates memory but fails to release it when it’s no longer needed. This happens when references to unused objects are unintentionally retained, preventing the memory from being reclaimed.

Over time, this can cause increased memory usage, degraded performance, or program crashes.

How to Detect a Memory Leak

  • Monitoring Memory Usage: Use tools to track memory usage over time. If memory usage keeps growing even when the workload is stable, it could indicate a memory leak.
  • Use memory profiling tools like pprof to analyze heap allocations and identify objects that are unexpectedly retained. For example, run go tool pprof <binary> <heap-profile> to see which parts of the code are holding onto memory (see the sketch after this list).
  • Enable garbage collection logs to check memory usage trends. In Go, use the GODEBUG=gctrace=1 environment variable.
  • Heap Dumps: Capture and analyze heap dumps to see which objects are still in memory and why they are not being garbage collected.
  • Static Analysis: Use tools like golangci-lint to catch potential issues in the code that could lead to leaks.
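
One common way to gather heap profiles from a running service, referenced in the pprof bullet above, is to expose them over HTTP with the standard net/http/pprof package (a minimal sketch; the port is arbitrary):

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/ handlers on the default mux
)

func main() {
    log.Println(http.ListenAndServe("localhost:6060", nil))
}

The heap profile can then be fetched with go tool pprof http://localhost:6060/debug/pprof/heap.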
4.

How do you handle concurrency at scale in Go?

Answer
  • Goroutine patterns: Use worker pools to limit concurrency and avoid creating too many goroutines.
  • Channels and synchronization: Properly design channel usage to avoid deadlocks and use sync primitives like sync.Mutex, sync.RWMutex, or sync.Cond.
  • Context for cancellation: Pass context.Context to goroutines to signal cancellation, preventing resource leaks.
5.

What is a worker pool pattern and how do you implement it in Go?

Answer

A worker pool pattern uses a fixed number of goroutines (workers) to process jobs from a channel. This limits the maximum concurrency level and can prevent resource exhaustion.

Example:

// Worker function
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        // process job j
        results <- j * 2
    }
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)

    // Create a fixed number of workers
    for w := 1; w <= 5; w++ {
        go worker(jobs, results)
    }

    // Feed jobs
    for i := 1; i <= 10; i++ {
        jobs <- i
    }
    close(jobs)

    // Collect results
    for i := 1; i <= 10; i++ {
        <-results
    }
}
6.

How do you debug performance issues in Go applications?

Answer
  • Profiling tools: Use pprof, go test -bench, go tool trace, and runtime/pprof to analyze CPU, memory, and block profiles.
  • Metrics and logging: Instrument code using expvar, Prometheus metrics, or logs to understand performance characteristics.
  • Benchmarking: Write benchmarks and run go test -bench . to isolate slow code paths.
7.

What is the context package used for?

Answer

context provides a way to manage deadlines, cancellation signals, and request-scoped data. It’s essential for controlling goroutines in long-running services, ensuring that resources are cleaned up when requests end or time out.

ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
8.

How do you implement rate limiting in Go?

Answer

You can use the standard library’s time.Ticker or time.After to implement token bucket algorithms, or use third-party packages like golang.org/x/time/rate.

Simple token bucket example:

limiter := rate.NewLimiter(2, 5) // 2 requests/second, burst of 5
if limiter.Allow() {
    // handle request
} else {
    // rate limit exceeded
}
9.

Explain how to use the sync package (Mutex, RWMutex, WaitGroup, Once).

Answer
  • Mutex/RWMutex: Protect shared resources by locking critical sections. sync.Mutex allows exclusive access, while sync.RWMutex allows multiple readers or one writer.
  • WaitGroup: Wait for a collection of goroutines to finish.
  • Once: Run a piece of code exactly once, often used for initialization.
data := make(map[string]int)
var mu sync.Mutex
var wg sync.WaitGroup

// Function to write to the map
writeToMap := func(key string, value int) {
    defer wg.Done()
    mu.Lock()
    data[key] = value
    mu.Unlock()
}

// Start goroutines
wg.Add(3)
go writeToMap("alpha", 1)
go writeToMap("beta", 2)
go writeToMap("gamma", 3)

wg.Wait()
fmt.Println("Final Map Data:", data)
10.

How do you prevent deadlocks?

Answer
  • Consistent locking order: Always lock shared resources in the same order.
  • Timeouts and cancellation: Use contexts or channels with timeouts.
  • Avoid holding locks for long: Minimize the time a lock is held.
  • Careful design: Use select statements and try to avoid circular waits.
11.

What are common memory pitfalls in Go and how to avoid them?

Answer
  • Slicing beyond capacity: Leads to runtime panics. Always check indices and capacity before re-slicing.
  • Memory leaks via goroutines: Goroutines that never return keep their stacks and everything they reference alive. Use contexts for cancellation and make sure every goroutine has a way to exit.
  • Not closing channels: Receivers ranging over a channel that is never closed block forever. Close channels once no more values will be sent.
12.

Explain interface internals and how dynamic dispatch works in Go.

Answer

Interfaces in Go store two pointers: one to the type’s method table (it’s called an “itab”) and one to the concrete data. Calls through the interface invoke methods via this lookup table, which results in dynamic dispatch at runtime.

13.

How do you ensure backward compatibility with Go modules?

Answer

Use semantic versioning. When making breaking changes, increment the major version and use the /v2, /v3, … suffix in the module path.

For example:

module github.com/user/repo/v2

This allows older code to continue using the old version without breaking.

14.

How do you do dependency injection in Go?

Answer

Go doesn’t have a built-in DI framework. You typically do dependency injection by passing interfaces or structs as parameters to constructors and functions. Third-party libraries or manual wiring in main() are common approaches.

type UserRepo interface {
    GetUser(id string) User
}

type UserService struct {
    repo UserRepo
}

func NewUserService(repo UserRepo) *UserService {
    return &UserService{repo: repo}
}
15.

How do you implement a graceful shutdown in a Go server?

Answer

Use context and os.Signal channel to catch SIGINT or SIGTERM. When the signal is received, initiate a shutdown of the server with a timeout, allowing in-flight requests to finish.

srv := &http.Server{Addr: ":8080"}
go srv.ListenAndServe()

quit := make(chan os.Signal, 1)
signal.Notify(quit, os.Interrupt, syscall.SIGTERM)
<-quit
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

srv.Shutdown(ctx)

16.

How do you efficiently handle JSON encoding and decoding in Go, particularly in high-performance applications?

Answer
  1. Leverage encoding/json Features:
    • Use well-defined structs with appropriate field tags to align JSON keys with expected formats.
    • Use json.Decoder for processing large JSON payloads incrementally, avoiding the need to load the entire payload into memory.
  2. Reduce Memory Allocations:
    • Reuse buffers with bytes.Buffer or sync.Pool to minimize memory allocations for repeated marshaling/unmarshaling operations.
    • Preallocate slices or maps when the size is predictable to avoid dynamic resizing.
  3. Optimize Encoding and Decoding:
    • Implement custom Marshaler and Unmarshaler interfaces when the default behavior isn’t efficient for the application’s requirements.
    • Use optimized third-party libraries like json-iterator or easyjson for faster serialization and deserialization.
  4. Profile and Benchmark:
    • Regularly benchmark JSON processing with testing and pprof to identify and resolve performance bottlenecks.
    • Avoid excessive use of reflection, as it increases processing time and memory usage.
  5. Streamline Error Handling:
    • Perform strict input validation during decoding to catch malformed data early.
    • Log errors with sufficient context to make debugging easier without impacting application performance.

By applying these strategies, JSON handling can be both performant and reliable, even in high-load scenarios.
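
As an illustration of the incremental-decoding point above, a sketch that streams a JSON array element by element with json.Decoder (the Event type and field names are assumptions):

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

type Event struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

func main() {
    // In practice this would be an *os.File or an HTTP response body.
    r := strings.NewReader(`[{"id":1,"name":"a"},{"id":2,"name":"b"}]`)

    dec := json.NewDecoder(r)
    if _, err := dec.Token(); err != nil { // read the opening '['
        panic(err)
    }
    for dec.More() {
        var e Event
        if err := dec.Decode(&e); err != nil {
            panic(err)
        }
        fmt.Println(e.ID, e.Name) // process one element at a time
    }
}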

17.

Can you explain the difference between json.Marshal and json.MarshalIndent in Go? In what scenarios would you use each, and why?

Answer

The key difference between json.Marshal and json.MarshalIndent lies in their output format:

  • json.Marshal: Produces a compact JSON representation without any additional whitespace. It is ideal for scenarios where minimizing the size of the JSON payload is critical, such as when sending data over a network or storing it in a database.
  • json.MarshalIndent: Produces a human-readable, indented JSON representation by adding whitespace and line breaks. This is useful for debugging, logging, or generating JSON files that need to be easily readable by humans.

Scenarios for Using json.Marshal:

  • High-performance APIs where reducing payload size improves speed and efficiency.
  • Compact storage requirements, such as in databases or cache systems.

Scenarios for Using json.MarshalIndent:

  • Logging or debugging to make JSON outputs easier to understand.
  • Generating configuration files or API documentation that requires readable JSON formatting.

By choosing the appropriate function based on the context, you can optimize for either performance or readability.
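
A quick side-by-side illustration (output shown in the comments):

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    v := map[string]int{"a": 1}

    compact, _ := json.Marshal(v)
    fmt.Println(string(compact)) // {"a":1}

    pretty, _ := json.MarshalIndent(v, "", "  ")
    fmt.Println(string(pretty))
    // {
    //   "a": 1
    // }
}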

18.

How do you handle large file processing in Go?

Answer
  • Streaming IO: Use bufio.Reader or io.Reader interfaces to read data in chunks instead of loading it all into memory.
  • Goroutines and pipelines: Process chunks concurrently.
  • Memory mapping (mmap): Use golang.org/x/exp/mmap if appropriate, but be mindful of system limitations.
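
A minimal streaming sketch for the bufio bullet above (the file path is a placeholder):

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
)

func main() {
    f, err := os.Open("large.log") // placeholder path
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    scanner := bufio.NewScanner(f)
    lineCount := 0
    for scanner.Scan() {
        _ = scanner.Text() // process one line at a time
        lineCount++
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }
    fmt.Println("lines:", lineCount)
}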
19.

Explain how to use build tags in Go.

Answer

Build tags allow conditional compilation. They are placed as comments at the top of a file:

//go:build linux

package main

This file only builds on Linux. Since Go 1.17, the //go:build form shown above is the preferred syntax (the older // +build comments are still recognized). Build tags control which files are compiled for different platforms or build configurations.

20.

How do you integrate C/C++ code with Go?

Answer

Use cgo. Add import "C" preceded by a comment block containing the C code or #include directives. Cgo allows calling C functions from Go code. It should be used sparingly because it adds build complexity and per-call overhead.
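
A minimal cgo sketch (assumes a C toolchain is installed and CGO_ENABLED=1):

package main

/*
#include <stdio.h>

static void sayHello() {
    printf("hello from C\n");
}
*/
import "C"

func main() {
    C.sayHello()
}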

21.

What is escape analysis and how does it affect performance?

Answer

Escape analysis determines whether variables can be allocated on the stack or must “escape” to the heap. If a variable’s lifetime extends beyond the function, or it’s referenced by a closure, it escapes. Minimizing escapes reduces garbage collection overhead and improves performance.

Use go build -gcflags="-m" to see compiler escape analysis logs.
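
A small illustration; running go build -gcflags="-m" on it reports the escape (exact compiler messages vary by version):

package main

import "fmt"

// newCounter returns a pointer to a local variable, so the variable must
// escape to the heap: its lifetime outlives the function call.
func newCounter() *int {
    n := 0
    return &n
}

// sum keeps everything on the stack: nothing escapes.
func sum(a, b int) int {
    return a + b
}

func main() {
    c := newCounter()
    *c++
    fmt.Println(*c, sum(1, 2))
}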

22.

How do you manage configuration in a Go application?

Answer
  • Environment variables: Use os.Getenv to read configuration from the environment.
  • Flags: Use the flag package for command-line parameters.
  • Configuration files (JSON, YAML): Parse with encoding/json, gopkg.in/yaml.v2, or other libraries.
  • Viper: A popular library for managing configurations from multiple sources.
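
A small sketch combining environment variables and flags from the bullets above (names and defaults are illustrative):

package main

import (
    "flag"
    "fmt"
    "os"
)

func main() {
    // Flag with a default value, overridable on the command line.
    port := flag.Int("port", 8080, "HTTP port to listen on")
    flag.Parse()

    // Environment variable with a fallback.
    env := os.Getenv("APP_ENV")
    if env == "" {
        env = "development"
    }

    fmt.Printf("env=%s port=%d\n", env, *port)
}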
23.

Describe the go toolchain commands (go build, go run, go test, go get).

Answer
  • go build: Compiles the packages and dependencies into an executable or object files.
  • go run: Compiles and runs the program immediately.
  • go test: Runs tests defined in _test.go files.
  • go get: Adds, updates, or removes module dependencies recorded in go.mod (since Go 1.17, installing executables is done with go install rather than go get).
24.

How do you use time.Duration and manage timeouts in Go?

Answer

time.Duration represents a span of time as an int64 nanosecond count. You can use time.Sleep, context.WithTimeout, or a select with a time.After channel to manage timeouts.

ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
25.

How can you check if two slices are equal in Go?

Answer

To check if two slices are equal in Go, you need to consider their length, order, and elements. Here’s a concise and reliable way to compare slices:

import "reflect"

func areSlicesEqual(slice1, slice2 []int) bool {
    return reflect.DeepEqual(slice1, slice2)
}

Explanation:

  • reflect.DeepEqual checks if the slices are of the same length and contain the same elements in the same order. This method works for slices of any type.

Manual Comparison (for better performance):

For simple types like int or string, you can manually compare the slices for improved performance:

func areSlicesEqual(slice1, slice2 []int) bool {
    if len(slice1) != len(slice2) {
        return false
    }
    for i := range slice1 {
        if slice1[i] != slice2[i] {
            return false
        }
    }
    return true
}

Key Points to Consider:

  • Ensure the slices are sorted if the order does not matter. Use sort.Slice before comparing.
  • For large slices, consider optimizations or hashing techniques if performance is critical.

By using the appropriate method, you can reliably check for equality based on your specific use case.
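
On Go 1.21 and newer you can also use the standard library's slices package, which performs the same length, order, and element check for comparable element types:

import "slices"

equal := slices.Equal([]int{1, 2, 3}, []int{1, 2, 3}) // true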

26.

How do you manage microservices communication in Go?

Answer
  • HTTP/REST: Use net/http or frameworks.
  • gRPC: Use google.golang.org/grpc for binary RPC.
  • Message queues: Integrate with Kafka, NATS, or RabbitMQ using client libraries.
  • Service discovery: Use Consul, etcd, or Kubernetes DNS for dynamic service locations.
27.

How do you handle logging in production?

Answer

Use structured logging libraries like zap or logrus. Add correlation IDs, timestamps, and severity levels. Log in JSON for easier parsing by log management systems. Ensure logs don’t overwhelm the system and consider log rotation.
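
If you want to stay within the standard library (Go 1.21+), log/slog provides structured logging out of the box; a minimal sketch with illustrative field names:

package main

import (
    "log/slog"
    "os"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    logger.Info("request handled",
        "correlation_id", "abc-123",
        "status", 200,
    )
}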

28.

How to safely publish on GitHub a Go module with private dependencies?

Answer
  • Use Go modules with private repositories:
    • Configure GOPRIVATE environment variable.
    • Use SSH keys or token-based auth for private repos.
    • Don’t commit credentials.
  • Document how others can access the private dependencies if they have the right credentials.
29.

Explain the difference between interface{} and generics in Go.

Answer
  • interface{} (empty interface): Accepts any type, but you lose type safety and must use type assertions or reflection.
  • Generics (Go 1.18+): Allow defining functions and data structures that can work with any type that satisfies certain constraints, providing type safety at compile time and reducing boilerplate code.
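
A short sketch contrasting the two (the Number constraint and Sum function are illustrative):

package main

import "fmt"

// With interface{} the caller loses type safety and needs assertions.
func first(items []interface{}) interface{} {
    return items[0]
}

// With generics the constraint is checked at compile time.
type Number interface {
    ~int | ~int64 | ~float64
}

func Sum[T Number](nums []T) T {
    var total T
    for _, n := range nums {
        total += n
    }
    return total
}

func main() {
    fmt.Println(first([]interface{}{"a", "b"})) // a (static type information lost)
    fmt.Println(Sum([]int{1, 2, 3}))            // 6
    fmt.Println(Sum([]float64{1.5, 2.5}))       // 4
}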
30.

How do you write benchmarks in Go?

Answer

Benchmarks are functions named BenchmarkXxx(b *testing.B) in _test.go files. Inside them, run the code you want to measure in a loop b.N times.

Run go test -bench=. to execute benchmarks.

func BenchmarkSum(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = 1 + 2
    }
}
31.

What’s new in recent Go releases that improves developer productivity?

Answer
  • Go generics: Introduced in Go 1.18, enabling reusable data structures and algorithms.
  • Faster compilation and smaller binaries: Each Go release refines compiler optimizations.
  • Improved standard library: Better support for fuzz testing, new APIs.
  • Tooling enhancements: gopls improvements, better go work for workspace management.

Coding tasks suitable for Golang interviews

1.

Implement a Merge Sort Function

Answer

Implement a merge sort function that takes a slice of integers and returns a new sorted slice using the merge sort algorithm.

package main

func mergeSort(nums []int) []int {
    // Your code here
    return nil
}

Requirements

  1. Input
    • A slice of integers, e.g., []int{5, 2, 9, 1}.
  2. Output
    • A new slice that contains the integers in sorted order.
  3. Algorithm Details
    • Divide and Conquer:
      • Recursively divide the input slice into two halves until each half contains one or no elements.
      • Merge the sorted halves to produce the final sorted slice.
    • The merging step should ensure elements are in ascending order.
  4. Edge Cases
    • Empty Slice: Return an empty slice.
    • Single Element: Return the same slice.
    • Already Sorted: Ensure the output is identical to the input.

Example Usage

package main

import (
    "fmt"
)

func main() {
    nums := []int{5, 2, 9, 1, 6, 3}
    sorted := mergeSort(nums)
    fmt.Println(sorted) // [1, 2, 3, 5, 6, 9]
}

Hints

  • Use a helper function to perform the merge step, where two sorted slices are combined into one sorted slice.
  • The base case for the recursion is when the input slice has one or no elements.
  • Ensure that your implementation works for slices of varying lengths, including empty slices.
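
Putting the hints together, one possible sketch (not the only valid implementation):

func mergeSort(nums []int) []int {
    if len(nums) <= 1 {
        return nums
    }
    mid := len(nums) / 2
    left := mergeSort(nums[:mid])
    right := mergeSort(nums[mid:])
    return merge(left, right)
}

// merge combines two sorted slices into one sorted slice.
func merge(a, b []int) []int {
    out := make([]int, 0, len(a)+len(b))
    i, j := 0, 0
    for i < len(a) && j < len(b) {
        if a[i] <= b[j] {
            out = append(out, a[i])
            i++
        } else {
            out = append(out, b[j])
            j++
        }
    }
    out = append(out, a[i:]...)
    out = append(out, b[j:]...)
    return out
}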
2.

LRU Cache

Answer

Implement an LRU (Least Recently Used) cache that is safe for concurrent use. The cache should support the following operations:

  1. Get(key):
    • Retrieve the value associated with the key.
    • Return false if the key is not present in the cache.
  2. Put(key, value):
    • Insert a key-value pair into the cache.
    • If the cache reaches its capacity, evict the least recently used item.

Requirements

  • The cache should be thread-safe and allow concurrent access.
  • Handle scenarios where the cache capacity is exceeded by evicting the least recently used item.

Function Signatures

package main

type LRUCache struct {
    // Your code here
}

func NewLRUCache(cap int) *LRUCache {
    // Your code here
    return nil
}

func (c *LRUCache) Get(key string) (string, bool) {
    // Your code here
    return "", false
}

func (c *LRUCache) Put(key, value string) {
    // Your code here
}

Example Usage

package main

import "fmt"

func main() {
    cache := NewLRUCache(2)

    cache.Put("a", "1")
    cache.Put("b", "2")

    fmt.Println(cache.Get("a")) // Output: "1", true

    cache.Put("c", "3") // Evicts key "b"

    fmt.Println(cache.Get("b")) // Output: "", false
    fmt.Println(cache.Get("c")) // Output: "3", true

    cache.Put("d", "4") // Evicts key "a"

    fmt.Println(cache.Get("a")) // Output: "", false
    fmt.Println(cache.Get("d")) // Output: "4", true
}

Notes

  • Use a doubly linked list to maintain the order of usage.
  • Use a hash map for fast access to nodes in the linked list.
  • Ensure thread safety by protecting shared data structures with a synchronization mechanism like sync.Mutex.
3.

Worker Pool

Answer

Implement a worker pool that processes jobs from a channel using a fixed number of workers. Each worker should:

  1. Read jobs from a jobs channel.
  2. Process the job.
  3. Send the results to a results channel.

Function Signature

package main

func startWorkerPool(numWorkers int, jobs <-chan int, results chan<- int) {
    // Your code here
}

Requirements

  1. Input:
    • numWorkers: The number of workers to spawn.
    • jobs: A channel containing jobs (integers) to be processed.
  2. Output:
    • results: A channel where the results of the job processing are sent.
  3. Behavior:
    • The worker pool should start exactly numWorkers goroutines.
    • Each worker should read from the jobs channel, process the job, and send the result to the results channel.
    • When the jobs channel is closed, all workers should stop processing.
    • Close the results channel once all jobs are processed.

Example Usage

package main

import "fmt"

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // Add jobs to the jobs channel
    for i := 1; i <= 5; i++ {
        jobs <- i
    }
    close(jobs)

    // Start the worker pool
    startWorkerPool(3, jobs, results)

    // Print results from the results channel
    for result := range results {
        fmt.Println(result)
    }
}

Notes

  • Each job can represent a task like squaring a number, performing a calculation, or any computational work.
  • Ensure all workers terminate gracefully when there are no more jobs to process.
  • The results channel should be closed by the worker pool once all results are sent.

Hints

  • Use a sync.WaitGroup to manage the lifecycle of worker goroutines.
  • Ensure the results channel is only closed after all workers finish processing.
  • Each worker should operate in its own goroutine to process jobs concurrently.
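
One possible sketch following these hints (assumes import "sync"; the doubling step stands in for real work):

func startWorkerPool(numWorkers int, jobs <-chan int, results chan<- int) {
    var wg sync.WaitGroup
    wg.Add(numWorkers)
    for i := 0; i < numWorkers; i++ {
        go func() {
            defer wg.Done()
            for j := range jobs {
                results <- j * 2 // example processing: double the job value
            }
        }()
    }
    // Close results only after every worker has finished.
    go func() {
        wg.Wait()
        close(results)
    }()
}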
4.

Token Bucket Rate Limiter

Answer

Task Description

Implement a token bucket rate limiter that allows a fixed number of requests per second. The rate limiter should:

  1. Allow a specified number of requests (rate) per second.
  2. Provide a method Allow() bool that:
    • Returns true if a request is allowed (a token is available).
    • Returns false if no tokens are available.

Function Signatures

package main

type RateLimiter struct {
    // Your code here
}

func NewRateLimiter(rate int) *RateLimiter {
    // Your code here
    return nil
}

func (r *RateLimiter) Allow() bool {
    // Your code here
    return false
}

Requirements

  1. Token Bucket Behavior:
    • The rate limiter should start with a full bucket of tokens (equal to rate).
    • Each second, the bucket should be refilled with tokens up to the maximum capacity (rate).
    • Requests can only proceed if tokens are available; each request consumes one token.
  2. Concurrency:
    • Ensure thread-safe operation for concurrent calls to Allow().
  3. Edge Cases:
    • Handle scenarios where no requests are made for an extended period (tokens should still refill).
    • Ensure tokens do not exceed the maximum capacity (rate).

Example Usage

package main

import (
    "fmt"
    "time"
)

func main() {
    rl := NewRateLimiter(2)

    // Simulate requests
    for i := 0; i < 5; i++ {
        fmt.Println(rl.Allow()) // Prints true for the first two requests, then false
        time.Sleep(300 * time.Millisecond)
    }

    time.Sleep(1 * time.Second) // Wait for tokens to refill

    fmt.Println(rl.Allow()) // Prints true after refill
    fmt.Println(rl.Allow()) // Prints true
    fmt.Println(rl.Allow()) // Prints false (exhausted tokens)

    close(rl.stopChan) // Stop the refiller goroutine
}

Notes

  • Use a ticker to periodically refill the tokens.
  • Protect shared data (e.g., the token count) with a synchronization mechanism like sync.Mutex.
  • Ensure proper cleanup of any goroutines when the rate limiter is no longer in use.

Hints

  • Use a sync.Mutex to synchronize access to the token count.
  • Create a separate goroutine to handle token refills at fixed intervals.
  • Use a chan struct{} or a similar mechanism to stop the refill goroutine gracefully.
5.

Concurrency-Safe Memoization

Answer

Task Description

Implement a concurrency-safe memoization function. Given a function f(key string) string, return a new function that:

  1. Caches results for each key, so subsequent calls with the same key return the cached result.
  2. Ensures thread safety for concurrent calls to the memoized function.

Function Signature

package main

func Memoize(f func(string) string) func(string) string {
    // Your code here
    return nil
}

Requirements

  1. Caching Behavior:
    • The returned function should store results for previously computed keys.
    • If a key is already cached, return the cached result instead of recomputing.
  2. Concurrency:
    • Ensure the memoization is thread-safe, allowing multiple goroutines to access the memoized function simultaneously without issues.
  3. Correctness:
    • Handle multiple unique keys, ensuring each key is computed and cached correctly.
    • Avoid redundant calls to the underlying function f for cached keys.

Example Usage

package main

import (
    "fmt"
)

func main() {
    slowFunc := func(s string) string {
        // Simulate slow computation
        return s + "-computed"
    }

    memoized := Memoize(slowFunc)

    // First call computes and caches the result
    fmt.Println(memoized("test")) // Output: "test-computed"

    // Second call retrieves the cached result
    fmt.Println(memoized("test")) // Output: "test-computed"
}

Notes

  • Use a mutex (e.g., sync.Mutex) to synchronize access to the cache.
  • Store cached results in a map[string]string.
  • Ensure that the function behaves correctly for both single-threaded and multi-threaded scenarios.

Hints

  • Before computing the result for a key, check if it exists in the cache.
  • Lock the cache for reading and writing to ensure thread safety.
  • Avoid holding the lock while executing the underlying function f to prevent deadlocks or reduced concurrency.
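
One possible sketch following these hints (assumes import "sync"; the trade-off of computing outside the lock is noted in the comments):

func Memoize(f func(string) string) func(string) string {
    var (
        mu    sync.Mutex
        cache = make(map[string]string)
    )
    return func(key string) string {
        mu.Lock()
        if v, ok := cache[key]; ok {
            mu.Unlock()
            return v
        }
        mu.Unlock()

        // Compute outside the lock so other keys are not blocked.
        // Two goroutines may race to compute the same key; the result
        // is still correct, just possibly computed twice.
        v := f(key)

        mu.Lock()
        cache[key] = v
        mu.Unlock()
        return v
    }
}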
6.

Binary Search Tree (BST)

Answer

Task Description

Implement a binary search tree with the following methods:

  1. Insert(val int):
    • Inserts a value into the binary search tree.
    • Ensures the tree maintains the binary search property:
      • Values in the left subtree are less than the current node.
      • Values in the right subtree are greater than the current node.
      • Duplicate values should not be added.
  2. Search(val int) bool:
    • Searches for a value in the binary search tree.
    • Returns true if the value is found, otherwise returns false.

Function Signatures

package main

type BST struct {
    val   int
    left  *BST
    right *BST
}

func (b *BST) Insert(val int) {
    // Your code here
}

func (b *BST) Search(val int) bool {
    // Your code here
    return false
}

Requirements

  1. Insert Method:
    • Adds the value in the correct position based on the binary search property.
    • Avoids adding duplicate values.
  2. Search Method:
    • Recursively checks the left or right subtree depending on the value being searched.
    • Returns false if a nil node is reached, indicating the value is not in the tree.
  3. Edge Cases:
    • Inserting into an empty tree.
    • Searching in an empty tree.
    • Duplicate insertions should not modify the tree.

Example Usage

package main

import "fmt"

func main() {
    var tree BST

    // Insert values into the BST
    tree.Insert(5)
    tree.Insert(3)
    tree.Insert(7)

    // Search for values
    fmt.Println(tree.Search(3))  // Output: true
    fmt.Println(tree.Search(10)) // Output: false

    // Insert more values
    tree.Insert(4)
    tree.Insert(6)

    // Validate structure manually or via tests
}

Notes

  • Ensure the binary search property is maintained after every insertion.
  • Avoid duplicate nodes when inserting a value that already exists in the tree.
  • Use recursion to simplify the implementation of both Insert and Search.

Hints

  • Insert Method:
    • Check if the value is less than the current node’s value to decide whether to insert into the left subtree.
    • Check if the value is greater to decide whether to insert into the right subtree.
    • If the node is nil, create a new node with the given value.
  • Search Method:
    • Use recursion to traverse the tree.
    • Return true when the current node’s value matches the search value.
    • Return false if a nil node is reached.
7.

Min-Heap

Answer

Task Description

Implement a min-heap with the following methods:

  1. Insert(val int):
    • Adds an integer to the heap while maintaining the heap property:
      • The smallest element is always at the root.
      • Each parent node is smaller than its child nodes.
  2. ExtractMin() (int, bool):
    • Removes and returns the smallest element (root) from the heap.
    • Returns false if the heap is empty.

Function Signatures

package main

type MinHeap struct {
    // Your code here
}

func (h *MinHeap) Insert(val int) {
    // Your code here
}

func (h *MinHeap) ExtractMin() (int, bool) {
    // Your code here
    return 0, false
}

Requirements

  1. Heap Property:
    • The heap should maintain the min-heap property:
      • The smallest element is always at the root (h.data[0]).
      • The children of index i are at indices 2i+1 and 2i+2.
      • The parent of index i is at (i-1)/2.
  2. Insert Method:
    • Add the new element to the end of the heap.
    • Restore the heap property by “bubbling up” the element.
  3. ExtractMin Method:
    • Swap the root with the last element, then remove the last element.
    • Restore the heap property by “bubbling down” the root element.
  4. Edge Cases:
    • Handle extraction from an empty heap.
    • Ensure the heap property is maintained after multiple insertions and extractions.

Example Usage

package main

import "fmt"

func main() {
    h := &MinHeap{}

    h.Insert(5)
    h.Insert(3)
    h.Insert(8)

    fmt.Println(h.ExtractMin()) // Output: 3, true
    fmt.Println(h.ExtractMin()) // Output: 5, true
    fmt.Println(h.ExtractMin()) // Output: 8, true
    fmt.Println(h.ExtractMin()) // Output: 0, false (heap is empty)
}

Notes

  • Use a slice ([]int) as the underlying data structure for the heap.
  • Implement helper functions to “bubble up” and “bubble down” elements as needed.
  • Ensure that all operations maintain the min-heap property efficiently.

Hints

  • Bubble Up:
    • Starting from the last inserted element, compare it with its parent.
    • Swap the element with its parent if it is smaller, and continue until the heap property is restored.
  • Bubble Down:
    • Starting from the root, compare it with its children.
    • Swap it with the smaller child if it violates the heap property, and continue until the property is restored.
  • Index Calculations:
    • Parent index: (i-1)/2
    • Left child index: 2i+1
    • Right child index: 2i+2
8.

Concurrency-Safe Map

Answer

Task Description

Implement a concurrency-safe map with the following methods:

  1. Get(key string) (string, bool):
    • Retrieves the value associated with the key.
    • Returns false if the key does not exist.
  2. Set(key, value string):
    • Stores the value associated with the key.
    • Overwrites the value if the key already exists.
  3. Delete(key string):
    • Removes the key-value pair from the map.
    • If the key does not exist, the method should handle it gracefully.

Function Signatures

package main

type ConcurrentMap struct {
    // Your code here
}

func NewConcurrentMap() *ConcurrentMap {
    // Your code here
    return nil
}

func (m *ConcurrentMap) Get(key string) (string, bool) {
    // Your code here
    return "", false
}

func (m *ConcurrentMap) Set(key, value string) {
    // Your code here
}

func (m *ConcurrentMap) Delete(key string) {
    // Your code here
}

Requirements

  1. Concurrency Safety:
    • Use a sync.RWMutex to allow multiple readers or a single writer at a time.
    • Ensure that Set and Delete operations are exclusive and block other operations.
  2. Efficiency:
    • Use RLock for read operations to allow multiple readers concurrently.
    • Use Lock for write operations to ensure thread safety.
  3. Edge Cases:
    • Handle retrieval of non-existent keys gracefully.
    • Ensure deletion of non-existent keys does not cause errors.

Example Usage

package main

import "fmt"

func main() {
    cm := NewConcurrentMap()

    cm.Set("foo", "bar")
    val, ok := cm.Get("foo")
    if ok {
        fmt.Println("Value for 'foo':", val) // Output: Value for 'foo': bar
    }

    cm.Delete("foo")
    _, ok = cm.Get("foo")
    if !ok {
        fmt.Println("'foo' was deleted.") // Output: 'foo' was deleted.
    }
}

Notes

  • Use a map[string]string as the underlying storage for key-value pairs.
  • Protect all operations on the map with a sync.RWMutex to ensure thread safety.
  • The Delete method should handle keys that do not exist gracefully.

Hints

  • Get Method:
    • Use RLock to allow multiple concurrent readers.
    • Ensure you release the lock after retrieving the value.
  • Set Method:
    • Use Lock to prevent other readers or writers from accessing the map while updating it.
  • Delete Method:
    • Use Lock to ensure exclusive access while removing a key.
  • Initialization:
    • Provide a constructor NewConcurrentMap to initialize the map and mutex properly.
9.

Pipeline with Generator and Filter

Answer

Task Description

Implement a pipeline that consists of the following stages:

  1. Generator:
    • Sends a sequence of integers into a channel.
    • Operates within its own goroutine.
  2. Filter:
    • Reads integers from an input channel and filters them based on a condition (e.g., keep only even numbers).
    • Operates within its own goroutine and sends the filtered results into an output channel.
  3. Consumer:
    • Reads integers from the filtered output channel.

These stages should be composed using channels to demonstrate a pipeline pattern in concurrent Go programming.

Function Signatures

package main

func generator(nums ...int) <-chan int {
    // Your code here
    return nil
}

func filterEven(in <-chan int) <-chan int {
    // Your code here
    return nil
}

Requirements

  1. Generator:
    • Accepts a variadic number of integers (nums ...int).
    • Sends each integer into a channel, then closes the channel.
  2. Filter:
    • Accepts an input channel (<-chan int).
    • Sends only even numbers into an output channel, then closes the channel.
  3. Concurrency:
    • Each stage (generator and filter) must run in its own goroutine.
    • Ensure channels are properly closed when processing is complete.
  4. Composition:
    • Combine these stages into a pipeline where the generator feeds the filter, and the filter feeds the consumer.

Example Usage

package main

import "fmt"

func main() {
    // Create a pipeline
    gen := generator(1, 2, 3, 4, 5, 6)
    even := filterEven(gen)

    // Consume the filtered output
    for num := range even {
        fmt.Println(num) // Output: 2, 4, 6
    }
}

Notes

  • Use channels to connect the stages of the pipeline.
  • Each stage should close its output channel when done.
  • Ensure the pipeline can handle empty input gracefully.

Hints

  • Generator:
    • Create a channel and launch a goroutine to send values into it.
    • Close the channel when all values are sent.
  • Filter:
    • Create an output channel and launch a goroutine to process values from the input channel.
    • Close the output channel after all input values are processed.
  • Channel Communication:
    • Use range to iterate over values from a channel.
    • Ensure channels are closed to signal completion to downstream stages.
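
One possible sketch of both stages, following these hints:

func generator(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out) // signal completion to the next stage
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

func filterEven(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            if n%2 == 0 {
                out <- n
            }
        }
    }()
    return out
}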
10.

Cycle Detection in Singly Linked List

Answer

Task Description

Implement a function to detect if a singly linked list has a cycle. A cycle exists if a node’s Next pointer points back to a previous node, creating a loop.

Function Signature

package main

type ListNode struct {
    Val  int
    Next *ListNode
}

func hasCycle(head *ListNode) bool {
    // Your code here
    return false
}

Requirements

  1. Input:
    • head: The head of a singly linked list (ListNode).
  2. Output:
    • Returns true if a cycle is detected, otherwise false.
  3. Behavior:
    • Use Floyd’s Cycle Detection Algorithm (Tortoise and Hare):
      • Use two pointers: slow moves one step at a time, and fast moves two steps.
      • If slow and fast meet, a cycle exists.
      • If fast or fast.Next becomes nil, the list does not have a cycle.

Example Usage

package main

import "fmt"

func main() {
    n1 := &ListNode{Val: 1}
    n2 := &ListNode{Val: 2}
    n3 := &ListNode{Val: 3}
    n1.Next = n2
    n2.Next = n3

    // Test without cycle
    fmt.Println(hasCycle(n1)) // Output: false

    // Create a cycle
    n3.Next = n1
    fmt.Println(hasCycle(n1)) // Output: true
}

Notes

  • Cycle:
    • A cycle exists if a node points back to an earlier node in the list.
    • Example: 1 -> 2 -> 3 -> 4 -> 2 (cycle starts at node with value 2).
  • Algorithm Details:
    • Use two pointers (slow and fast) to traverse the list.
    • If slow equals fast, a cycle is detected.
    • If the list ends (fast == nil or fast.Next == nil), there is no cycle.

Edge Cases

  1. An empty list (head == nil).
  2. A single node with no cycle.
  3. A single node pointing to itself (cycle).
  4. Multiple nodes with and without cycles.
  5. Cycles at various positions in the list (beginning, middle, or end).

Hints

  • Detecting Cycle:
    • Use a two-pointer technique to traverse the list at different speeds.
    • If they meet, the list contains a cycle.
  • Avoid Infinite Loops:
    • Use the fast pointer’s ability to skip nodes to terminate the loop if no cycle exists.
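
One possible sketch using the two-pointer approach described above:

func hasCycle(head *ListNode) bool {
    slow, fast := head, head
    for fast != nil && fast.Next != nil {
        slow = slow.Next      // one step
        fast = fast.Next.Next // two steps
        if slow == fast {
            return true
        }
    }
    return false
}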
11.

Tarjan’s Algorithm for Strongly Connected Components (SCCs)

Answer

Task Description

Implement Tarjan’s Algorithm to find all strongly connected components (SCCs) in a directed graph. The graph is represented as an adjacency list: map[int][]int.

Function Signature

package main

func tarjanSCC(graph map[int][]int) [][]int {
    // Your code here
    return nil
}

Requirements

  1. Input:
    • graph: A directed graph represented as an adjacency list where the keys are node IDs and the values are slices of neighboring nodes.
  2. Output:
    • A slice of slices, where each inner slice represents an SCC containing node IDs.
    • SCCs can be in any order, but the nodes within each SCC must be grouped together.
  3. Behavior:
    • Use Tarjan’s Algorithm to find SCCs:
      • Perform Depth-First Search (DFS).
      • Maintain index and low-link values for each node.
      • Use a stack to track nodes currently in the DFS stack.
      • Identify SCCs when the low-link value equals the index value for a node.
  4. Complexity:
    • Time Complexity: O(V + E), where V is the number of nodes and E is the number of edges.

Example Usage

package main

import "fmt"

func main() {
    graph := map[int][]int{
        1: {2},
        2: {3},
        3: {1, 4},
        4: {5},
        5: {6, 7},
        6: {4},
        7: {},
    }

    fmt.Println(tarjanSCC(graph))
    // Output (order may vary): [[1 2 3] [4 5 6] [7]]
}

Notes

  • Strongly Connected Components:
    • An SCC is a maximal subgraph where every node is reachable from every other node in the subgraph.
    • Example:
      • For the graph 1 -> 2 -> 3 -> 1, the SCC is [1, 2, 3].
  • Algorithm Details:
    • Maintain:
      • index: Tracks the order of discovery for each node.
      • low-link: The smallest index reachable from the node.
    • Use a stack to manage nodes in the current DFS path.

Edge Cases

  1. An empty graph (graph == nil).
  2. A graph with no edges (graph[node] == [] for all nodes).
  3. Disconnected graphs.
  4. Cycles of varying sizes.
  5. Single-node SCCs (e.g., nodes with no edges).

Hints

  • Initialization:
    • Use maps to track index and low-link values for nodes.
    • Use a boolean map to track whether a node is currently in the DFS stack.
  • Algorithm Steps:
    1. Start DFS from an unvisited node.
    2. Set its index and low-link values to the current index.
    3. Push the node onto the stack and mark it as in the stack.
    4. For each neighbor:
      • If unvisited, recursively call DFS and update the node’s low-link value.
      • If in the stack, update the low-link value based on the neighbor’s index.
    5. If low-link[node] == index[node], extract an SCC by popping nodes from the stack until the current node is reached.
  • Output:
    • Collect SCCs during the DFS process and return them at the end.
12.

Dijkstra’s Algorithm for Shortest Paths

Answer

Task Description

Implement Dijkstra’s algorithm to find the shortest distances from a given start node to all other nodes in a weighted, directed graph. The graph is represented as an adjacency list of the form map[int][]struct{to, weight int}.

Function Signature

package main

func dijkstra(graph map[int][]struct{ to, weight int }, start int) map[int]int {
    // Your code here
    return nil
}

Requirements

  1. Input:
    • graph: A directed graph represented as an adjacency list where:
      • Keys are node IDs.
      • Values are slices of structs containing:
        • to: The destination node.
        • weight: The weight of the edge.
    • start: The node from which to calculate shortest distances.
  2. Output:
    • A map where:
      • Keys are node IDs.
      • Values are the shortest distances from the start node to the respective node.
    • If a node is not reachable from the start, its distance should be math.MaxInt.
  3. Behavior:
    • Use a priority queue (min-heap) to always process the node with the smallest known distance.
    • Relax edges to update the shortest known distances.
    • Return the computed shortest distances for all nodes.
  4. Complexity:
    • Time Complexity: O((V + E) log V), where V is the number of nodes and E is the number of edges.

Example Usage

package main

import "fmt"

func main() {
    graph := map[int][]struct{ to, weight int }{
        0: {{to: 1, weight: 4}, {to: 2, weight: 1}},
        1: {{to: 3, weight: 1}},
        2: {{to: 1, weight: 2}, {to: 3, weight: 5}},
        3: {},
    }
    start := 0
    dist := dijkstra(graph, start)

    fmt.Println(dist)
    // Output:
    // map[0:0 1:3 2:1 3:4]
}

Notes

  • Algorithm Details:
    • Use a priority queue to manage nodes and their current shortest distances.
    • Initialize all distances to math.MaxInt (infinity), except the start node which is set to 0.
    • Process each node:
      • For each neighbor, calculate the distance through the current node.
      • If the calculated distance is shorter than the known distance, update it and push the neighbor into the priority queue.
  • Edge Cases:
    • A graph with no nodes or edges.
    • Disconnected graphs where some nodes are unreachable.
    • A single node graph.

Hints

  • Priority Queue:
    • Use a min-heap (container/heap) to efficiently retrieve the next node with the smallest distance.
  • Relaxation:
    • For each edge (u, v) with weight w:
      • If dist[u] + w < dist[v], update dist[v] to dist[u] + w.
  • Initialization:
    • Set the distance of all nodes to math.MaxInt.
    • Set the distance of the start node to 0.
13.

Topological Sort for Directed Acyclic Graph (DAG)

Answer

Task Description

Implement a function to perform topological sort on a directed acyclic graph (DAG). The function should return one possible ordering of nodes such that for every directed edge (u, v), node u appears before node v in the ordering.

Function Signature

package main

func topologicalSort(graph map[int][]int) []int {
    // Your code here
    return nil
}

Requirements

  1. Input:
    • graph: A directed acyclic graph represented as an adjacency list:
      • Keys are node IDs.
      • Values are slices of integers representing neighboring nodes.
  2. Output:
    • A slice of integers representing one valid topological ordering of the nodes.
  3. Behavior:
    • Compute the in-degree (number of incoming edges) for each node.
    • Initialize a queue with all nodes that have an in-degree of 0.
    • Process nodes in the queue:
      • Append the node to the order.
      • Decrement the in-degree of its neighbors.
      • Add neighbors with an in-degree of 0 to the queue.
    • Return the order once all nodes are processed.
  4. Complexity:
    • Time Complexity: O(V + E), where V is the number of nodes and E is the number of edges.

Example Usage

package main

import "fmt"

func main() {
    graph := map[int][]int{
        0: {1, 2},
        1: {3},
        2: {3},
        3: {},
    }

    order := topologicalSort(graph)
    fmt.Println(order)
    // Output: [0, 1, 2, 3] or [0, 2, 1, 3] (any valid topological order)
}

Notes

  • Topological Sort:
    • Only applicable to directed acyclic graphs (DAGs).
    • A valid ordering ensures that for every edge (u, v), u appears before v.
  • Algorithm Details:
    • Compute in-degrees for all nodes.
    • Use a queue to process nodes with in-degree 0.
    • Append nodes to the order and update in-degrees of their neighbors.

Edge Cases

  1. Empty Graph:
    • Input graph has no nodes (graph == nil).
    • Output should be an empty slice.
  2. Disconnected Graph:
    • Graph with multiple nodes but no edges.
    • Any permutation of nodes is a valid topological order.
  3. Single Node:
    • Graph contains a single node with no edges.
    • The order should contain just that node.
  4. Multiple Valid Orders:
    • Graph with multiple valid topological orders.
    • The function can return any valid order.

Hints

  • In-degree Calculation:
    • Traverse the adjacency list to count the incoming edges for each node.
  • Queue Initialization:
    • Add nodes with an in-degree of 0 to the queue at the start.
  • Order Construction:
    • Use a queue to process nodes in in-degree order.
    • For each node, decrement the in-degree of its neighbors and enqueue any neighbor whose in-degree becomes 0.
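
One possible sketch of Kahn's algorithm following these hints:

func topologicalSort(graph map[int][]int) []int {
    // Compute in-degrees for every node that appears in the graph.
    inDegree := make(map[int]int)
    for node, neighbors := range graph {
        if _, ok := inDegree[node]; !ok {
            inDegree[node] = 0
        }
        for _, nb := range neighbors {
            inDegree[nb]++
        }
    }

    // Start with all nodes whose in-degree is 0.
    queue := []int{}
    for node, d := range inDegree {
        if d == 0 {
            queue = append(queue, node)
        }
    }

    order := make([]int, 0, len(inDegree))
    for len(queue) > 0 {
        n := queue[0]
        queue = queue[1:]
        order = append(order, n)
        for _, nb := range graph[n] {
            inDegree[nb]--
            if inDegree[nb] == 0 {
                queue = append(queue, nb)
            }
        }
    }
    return order
}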

14.

Pub/Sub System

Answer

Task Description

Implement a simple Publish/Subscribe (Pub/Sub) system with the following features:

  1. Subscribers can register to receive messages by subscribing.
  2. Publishers can send messages, which are delivered to all subscribers.
  3. The system should support concurrent publishers and subscribers safely.

Function Signatures

package main

type PubSub struct {
    // Your code here
}

func NewPubSub() *PubSub {
    // Your code here
    return nil
}

func (ps *PubSub) Subscribe() <-chan string {
    // Your code here
    return nil
}

func (ps *PubSub) Publish(msg string) {
    // Your code here
}

Requirements

  1. Concurrency Safety:
    • Ensure thread-safe access to the list of subscribers.
    • Use synchronization primitives such as sync.RWMutex.
  2. Subscriber Channels:
    • Each subscriber receives messages on their own channel.
    • Use buffered channels to avoid deadlocks during publishing.
  3. Publishing:
    • Deliver each message to all current subscribers.
    • Subscribers added after a message is published should not receive that message.
  4. Scalability:
    • Support multiple concurrent publishers and subscribers efficiently.

Example Usage

package main


import "fmt"


func main() {
ps := NewPubSub()


sub1 := ps.Subscribe()
sub2 := ps.Subscribe()


go func() {
ps.Publish("hello")
}()


fmt.Println(<-sub1) // Output: "hello"
fmt.Println(<-sub2) // Output: "hello"


}

Notes

  • Subscription:
    • A new subscriber is added by creating a new channel and appending it to the list of subscribers.
  • Publishing:
    • On publishing a message, iterate over the list of subscribers and send the message to each subscriber’s channel.
  • Channel Management:
    • Ensure proper handling of channels (e.g., closing channels when necessary).

Edge Cases

  1. Publishing messages when there are no subscribers.
  2. Handling subscribers that are added after messages have been published.
  3. Ensuring no subscriber misses a message when multiple publishers publish concurrently.
  4. Subscribers unsubscribing or being garbage collected (if implemented).

Hints

  • Mutex Usage:
    • Use sync.RWMutex for protecting the subscriber list:
      • RLock for read-only access (e.g., publishing).
      • Lock for modifying the list (e.g., subscribing).
  • Channel Buffering:
    • Use buffered channels for subscriber channels to prevent deadlocks if a subscriber is slow to consume messages.
  • Iterating Over Subscribers:
    • Ensure thread safety when iterating over the list of subscribers during publishing.
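
Possible Implementation

A minimal sketch using sync.RWMutex and buffered subscriber channels, as suggested in the hints. The buffer size of 10 is an arbitrary assumption; a slow subscriber whose buffer fills up will still block publishers in this simple version.

package main

import "sync"

type PubSub struct {
    mu   sync.RWMutex
    subs []chan string
}

func NewPubSub() *PubSub {
    return &PubSub{}
}

func (ps *PubSub) Subscribe() <-chan string {
    ch := make(chan string, 10) // buffered so publishers rarely block
    ps.mu.Lock()
    ps.subs = append(ps.subs, ch)
    ps.mu.Unlock()
    return ch
}

func (ps *PubSub) Publish(msg string) {
    ps.mu.RLock()
    defer ps.mu.RUnlock()
    for _, ch := range ps.subs {
        ch <- msg
    }
}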
15.

Parallel File Processing

Answer

Task Description

Implement a function that processes a list of files in parallel using a specified number of worker goroutines. Each worker processes files concurrently by calling a dummy processing function on each file.

Function Signature

package main


func processFiles(files []string, workers int) {
// Your code here
}

Requirements

  1. Input:
    • files: A slice of filenames (strings) to be processed.
    • workers: The number of worker goroutines to use for concurrent processing.
  2. Behavior:
    • Process all files in the files slice by passing each to a predefined dummy function (testProcessFunc).
    • Distribute the workload across the specified number of worker goroutines.
  3. Concurrency:
    • Use a worker pool pattern:
      1. Create a channel to pass filenames to workers.
      2. Spawn the specified number of workers.
      3. Ensure all files are processed and all workers exit gracefully.
  4. Edge Cases:
    • If files is empty, the function should do nothing.
    • If workers is 0 or negative, default to using 1 worker.

Example Usage

package main


import "fmt"


// Example dummy processing function; processFiles is assumed to call it for each file
var testProcessFunc = func(file string) {
    fmt.Println("Processing:", file)
}

func main() {
    files := []string{"file1", "file2", "file3"}

    // Process files using 3 workers
    processFiles(files, 3)
}

Notes

  • Worker Pool Pattern:
    • Create a fixed number of workers, each consuming tasks from a shared channel.
    • Close the channel after sending all files to signal workers to stop.
  • Thread Safety:
    • If maintaining shared state (e.g., tracking processed files), use synchronization primitives like sync.Mutex.
  • Default Behavior:
    • If workers is invalid (e.g., ≤0), default to a single worker.

Edge Cases

  1. Empty files slice (files == nil or len(files) == 0).
  2. Negative or zero workers.
  3. Very large number of files.
  4. Long-running or blocking dummy processing function.

Hints

  • Channel Usage:
    • Use a channel to pass filenames to workers.
    • Close the channel after sending all filenames.
  • Worker Loop:
    • Each worker should:
      1. Continuously read from the channel.
      2. Exit when the channel is closed.
  • Synchronization:
    • Use sync.WaitGroup to ensure all workers finish before returning from the function.

Example Workflow

  1. Create a channel to pass filenames.
  2. Spawn the specified number of worker goroutines, each reading from the channel and processing files.
  3. Send filenames to the channel.
  4. Close the channel after all filenames are sent.
  5. Wait for all workers to finish.

Performance Considerations

  • Ensure that the number of workers is optimal for the workload.
  • Avoid deadlocks by using buffered channels or ensuring timely consumption of messages.
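
Possible Implementation

A minimal worker-pool sketch following the workflow above. It assumes testProcessFunc is a package-level function (or variable) that processes a single file, as in the example usage.

package main

import "sync"

func processFiles(files []string, workers int) {
    if len(files) == 0 {
        return
    }
    if workers <= 0 {
        workers = 1
    }

    jobs := make(chan string)
    var wg sync.WaitGroup

    // Spawn the workers; each drains the jobs channel until it is closed.
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for file := range jobs {
                testProcessFunc(file)
            }
        }()
    }

    // Send every filename, then close the channel to signal completion.
    for _, f := range files {
        jobs <- f
    }
    close(jobs)

    wg.Wait()
}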
16.

Binary Search

Answer

Task Description

Implement a function that performs binary search to find the index of a target integer in a sorted slice. If the target is found, return its index; otherwise, return -1.

Function Signature

package main


func binarySearch(arr []int, target int) int {
// Your code here
return -1
}

Requirements

  1. Input:
    • arr: A sorted slice of integers.
    • target: An integer to search for in the slice.
  2. Output:
    • Return the index of target if found.
    • Return -1 if target is not in the slice.
  3. Behavior:
    • Use a binary search algorithm:
      1. Start with two pointers, low (0) and high (len(arr) - 1).
      2. Calculate mid = (low + high) / 2.
      3. Compare arr[mid] with target:
        • If equal, return mid.
        • If arr[mid] < target, move low to mid + 1.
        • If arr[mid] > target, move high to mid - 1.
      4. Repeat until low > high.
  4. Complexity:
    • Time Complexity: O(log n).
    • Space Complexity: O(1) (iterative implementation).

Example Usage

package main


import "fmt"


func main() {
arr := []int{1, 2, 3, 4, 5}
fmt.Println(binarySearch(arr, 3)) // Output: 2
fmt.Println(binarySearch(arr, 6)) // Output: -1
}

Notes

  • Binary Search:
    • Efficient for searching in a sorted slice.
    • Works by repeatedly dividing the search space in half.
  • Edge Cases:
    • Empty slice.
    • Target less than the smallest element.
    • Target greater than the largest element.
    • Target not in the slice but within the range of elements.

Edge Cases

  1. Empty Slice:
    • Input slice is empty (arr == nil or len(arr) == 0).
    • Output should be -1.
  2. Single Element:
    • Slice has one element:
      • If the target matches the element, return its index (0).
      • Otherwise, return -1.
  3. Duplicates:
    • If the slice contains duplicate elements, any valid index of the target is acceptable.
  4. Not Found:
    • Target is outside the range of the smallest and largest elements.
    • Target is within the range but not in the slice.
  5. Boundary Conditions:
    • Target matches the first or last element in the slice.

Hints

  • Midpoint Calculation:
    • Use mid := low + (high-low)/2 to avoid potential overflow in some environments.
  • Loop Termination:
    • Stop the loop when low > high.
  • Testing for Duplicates:
    • Ensure the returned index corresponds to a valid occurrence of the target, even if there are duplicates.

Example Workflow

  1. Start with the entire range of the slice (low = 0, high = len(arr) - 1).
  2. Calculate the midpoint and compare:
    • If arr[mid] == target, return mid.
    • If arr[mid] < target, discard the left half by updating low.
    • If arr[mid] > target, discard the right half by updating high.
  3. Repeat until the range is invalid (low > high).

Performance Considerations

  • Binary search is optimal for sorted slices.
  • Ensure correctness for edge cases like empty slices and single-element slices.
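
Possible Implementation

A minimal iterative sketch of the algorithm described above.

package main

func binarySearch(arr []int, target int) int {
    low, high := 0, len(arr)-1
    for low <= high {
        mid := low + (high-low)/2 // avoids overflow on very large slices
        switch {
        case arr[mid] == target:
            return mid
        case arr[mid] < target:
            low = mid + 1
        default:
            high = mid - 1
        }
    }
    return -1
}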
17.

Fan-Out / Fan-In Task

Answer

You need to implement two functions in Go to demonstrate a fan-out/fan-in concurrency pattern:

  1. fanOut(in <-chan int, workers int) []<-chan int
  2. fanIn(channels []<-chan int) <-chan int

Task Requirements

  1. fanOut
    • Signature:
func fanOut(in <-chan int, workers int) []<-chan int {
// your code here
return nil
}
  • Goal:
    • Launch workers goroutines that collectively process all items from the input channel in.
    • Each worker should receive a subset of the input items (ensuring each item is processed exactly once by exactly one worker).
    • Each worker outputs processed results to its own channel.
  • Return: A slice of channels, each one corresponding to a single worker’s output stream.
  2. fanIn
  • Signature:
func fanIn(channels []<-chan int) <-chan int {
// your code here
return nil
}
  • Goal:
    • Merge multiple channels (produced by the fanOut workers) into a single output channel.
    • Once all worker output channels are fully read, the merged output channel should be closed.

Important Details

  • Data Transformation:
    • Unless otherwise specified, assume the data should not be changed. If an integer n enters the in channel, it should appear unchanged on the final merged output channel.
  • Distribution Logic:
    • Each item from in must be processed exactly once.
    • You can decide how to distribute items among workers (e.g., round-robin).
  • Channel Closure:
    • fanOut must properly close each worker’s output channel when processing is finished.
    • fanIn must close its combined output channel once all worker channels are exhausted.
  • Parallelism:
    • Use goroutines to handle concurrency.
    • Each worker should run in a separate goroutine.

Example Usage

func main() {
in := make(chan int)
go func() {
for i := 1; i <= 5; i++ {
in <- i
}
close(in)
}()

// Distribute items among 2 workers
outs := fanOut(in, 2)

// Merge worker outputs
merged := fanIn(outs)

// Print the results
for val := range merged {
fmt.Println(val)
}
}

Hints

  • Consider creating a small “dispatcher” goroutine in fanOut that reads from in and distributes items across worker-specific input channels.
  • Each worker goroutine should read from its own channel, process the data, and send results to its own output channel.
  • In fanIn, you can use a sync.WaitGroup to track when all worker channels have finished sending data. When all are done, close the merged output channel.
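
Possible Implementation

A minimal sketch. Instead of the dispatcher goroutine mentioned in the hints, every worker reads directly from the shared input channel, which still guarantees each item is consumed exactly once; values are passed through unchanged.

package main

import "sync"

func fanOut(in <-chan int, workers int) []<-chan int {
    outs := make([]<-chan int, 0, workers)
    for i := 0; i < workers; i++ {
        out := make(chan int)
        go func() {
            defer close(out) // close this worker's output once the input is drained
            for v := range in {
                out <- v // pass through unchanged
            }
        }()
        outs = append(outs, out)
    }
    return outs
}

func fanIn(channels []<-chan int) <-chan int {
    merged := make(chan int)
    var wg sync.WaitGroup
    wg.Add(len(channels))

    for _, ch := range channels {
        go func(c <-chan int) {
            defer wg.Done()
            for v := range c {
                merged <- v
            }
        }(ch)
    }

    // Close the merged channel once every worker channel is exhausted.
    go func() {
        wg.Wait()
        close(merged)
    }()
    return merged
}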
18.

Merging Sorted Channels

Answer

Task Description

You need to implement a function:

func mergeSortedChannels(chs []<-chan int) <-chan int {
// Your code here
return nil
}

Requirements

  • You are given multiple channels, each producing sorted integers in non-decreasing order.
  • Your task is to merge these channels into one output channel that yields all integers from the input channels in a globally sorted order.
  • Once you finish reading all input channels, you must close the output channel so that consumers can range over it until completion.

Considerations

  • Because each input channel is sorted, you can think of this as merging multiple sorted lists (but in a streaming fashion).
  • You need to ensure that no integer is lost or duplicated.
  • The final output should be sorted in non-decreasing order.
  • Handle edge cases gracefully, including:
    • Empty channels (some or all input channels might not produce any values).
    • Single channel (merging a single sorted channel is a trivial pass-through).
    • Channels of differing lengths.
    • Duplicate values across channels should appear correctly in sorted order.

Example Usage

func main() {
ch1 := make(chan int)
ch2 := make(chan int)
ch3 := make(chan int)

// Start producers
go func() {
// Sorted values
for _, v := range []int{1, 4, 7, 10} {
ch1 <- v
}
close(ch1)
}()

go func() {
// Sorted values
for _, v := range []int{2, 2, 6, 9} {
ch2 <- v
}
close(ch2)
}()

go func() {
// Sorted values
for _, v := range []int{3, 5} {
ch3 <- v
}
close(ch3)
}()

// Merge
merged := mergeSortedChannels([]<-chan int{ch1, ch2, ch3})

// Consume merged results
for val := range merged {
fmt.Println(val)
}
// Output should be: 1,2,2,3,4,5,6,7,9,10
}

Hints

  • A typical approach might use a min-heap or priority queue (based on the smallest current head among the channels).
  • Alternatively, you could “k-way merge” by reading from each channel in a loop, always taking the smallest available next integer.
  • Ensure you handle closing the output channel once all input channels are exhausted.
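
Possible Implementation

A minimal k-way merge sketch without a heap: it keeps the current head of each channel and repeatedly emits the smallest one. This costs O(k) per emitted value, which is fine for a small number of channels; a min-heap would scale better for many inputs.

package main

func mergeSortedChannels(chs []<-chan int) <-chan int {
    out := make(chan int)

    go func() {
        defer close(out)

        heads := make([]int, len(chs))
        alive := make([]bool, len(chs))

        // Prime the current head of every channel.
        for i, ch := range chs {
            if v, ok := <-ch; ok {
                heads[i], alive[i] = v, true
            }
        }

        for {
            // Pick the channel whose head is smallest.
            minIdx := -1
            for i := range chs {
                if alive[i] && (minIdx == -1 || heads[i] < heads[minIdx]) {
                    minIdx = i
                }
            }
            if minIdx == -1 {
                return // every channel is exhausted
            }

            out <- heads[minIdx]

            // Refill the head of the channel we just consumed from.
            if v, ok := <-chs[minIdx]; ok {
                heads[minIdx] = v
            } else {
                alive[minIdx] = false
            }
        }
    }()

    return out
}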
19.

Map-Reduce Pattern

Answer

Task Description

You must implement a map-reduce pattern in Go:

func mapReduce(inputs []int, mapper func(int) int, reducer func(int, int) int) int {
// Your code here
return 0
}

Requirements

  • Inputs: A slice of integers (inputs).
  • Mapper: A function func(int) int that transforms each integer into an intermediate integer result.
    • Example: squaring the number, incrementing by 1, turning each integer into “1” to count occurrences, etc.
  • Reducer: A function func(int, int) int that aggregates all intermediate results into a single integer.
    • Example: sum, product, minimum, maximum, etc.
  • Behavior:
    • You must apply mapper to each element of inputs.
    • Then you must reduce all mapper outputs into one final integer result using reducer.
    • Return this final integer.
  • Edge Cases:
    • Empty inputs: The behavior might default to 0 or some neutral value, depending on your design.
    • Single-element input: The result is simply mapper(singleValue).
    • Large input: Implementation should handle any size of input array without issues.

Example Usage

func main() {
inputs := []int{1,2,3,4}
mapper := func(x int) int { return x*x }         // square
reducer := func(a int, b int) int { return a+b } // sum


result := mapReduce(inputs, mapper, reducer)
fmt.Println("Sum of squares:", result)
// Expected: 1+4+9+16 = 30
}

Hints

  • Think of it like two steps:
    1. Map: transform each item i -> mapper(i).
    2. Reduce: combine all mapped results using reducer.
  • You can implement it with a simple loop:
    • Initialize some accumulator with a sensible “zero value.”
    • For each element in inputs, apply mapper first, then incorporate the mapped value into your accumulator using reducer.
  • Decide on how to handle the empty inputs. The typical approach might be to return 0 for operations like sum, or define a special case for multiplication, etc.
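
Possible Implementation

A minimal sequential sketch. It returns 0 for empty input, which matches the edge case above; adjust the identity value if your reducer needs something else (for example, 1 for a product).

package main

func mapReduce(inputs []int, mapper func(int) int, reducer func(int, int) int) int {
    if len(inputs) == 0 {
        return 0
    }

    // Seed the accumulator with the first mapped value, then fold in the rest.
    acc := mapper(inputs[0])
    for _, v := range inputs[1:] {
        acc = reducer(acc, mapper(v))
    }
    return acc
}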
20.

Custom Error Wrapping

Answer

Task Description

You need to create a custom error type that wraps another error. Go’s errors package recognizes wrapped errors if your type implements the Unwrap() method to return the wrapped cause. The type should also implement the Error() method.

package main


type MyError struct {
// Your code here, e.g.:
// message string
// cause   error
}


func (e *MyError) Error() string {
// Should return a meaningful error message, possibly combining e.message + e.cause
return ""
}


func (e *MyError) Unwrap() error {
// Should return the wrapped cause if present, or nil otherwise
return nil
}

Requirements

  1. Fields
    • You may store an underlying cause (type error) and a custom message (type string) within your struct.
  2. Error()
    • Returns a string describing the error.
    • Typically includes the custom message and possibly details about the cause.
  3. Unwrap()
    • Returns the wrapped cause (error), if any. If no cause, return nil.
    • This method enables errors.Is and errors.Unwrap to work correctly on your custom error.
  4. Behavior
    • The built-in errors.Unwrap() must retrieve the underlying error from your custom type.
    • errors.Is(yourError, someOtherError) should work as expected if someOtherError is the same as (or part of the chain from) your underlying cause.
  5. Edge Cases
    • Nil cause: If your custom error has no underlying cause, Unwrap() should return nil.
    • Empty message: It’s valid to have an empty message, though typically an error message is expected.

Example Usage

package main


import (
"errors"
"fmt"
)


func main() {
orig := errors.New("original error")
myErr := &MyError{
message: "Something went wrong",
cause:   orig,
}


fmt.Println("MyError says:", myErr) // calls .Error()


// Unwrap the error to see if we get the original cause
fmt.Println("Unwrapped cause:", errors.Unwrap(myErr))
}

Hints

  • Make sure your MyError struct fields are exported or at least accessible within the package, if needed by the tests (though the tests might rely purely on Error() and Unwrap()).
  • Often, the string returned by Error() might look like fmt.Sprintf("%s: %v", e.message, e.cause), but you can customize.
  • If you set cause to nil, Unwrap() must return nil. This is important for errors.Is and errors.Unwrap usage.
  • Pay attention to clarity. A common pattern is:
func (e *MyError) Error() string {
if e.cause != nil {
return fmt.Sprintf("%s: %v", e.message, e.cause)
}
return e.message
}
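
Possible Implementation

Putting the pieces together, a minimal sketch of the full type might look like this; the field names message and cause follow the comments in the skeleton above.

package main

import "fmt"

type MyError struct {
    message string
    cause   error
}

func (e *MyError) Error() string {
    if e.cause != nil {
        return fmt.Sprintf("%s: %v", e.message, e.cause)
    }
    return e.message
}

func (e *MyError) Unwrap() error {
    return e.cause // nil when there is no wrapped cause
}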
21.

Line-by-Line File Processing

Answer

Task Description

You need to implement a function that reads a file line by line using a buffered approach and processes each line as it is read, without loading the entire file into memory.

func processFile(filename string) error {
// Your code here:
// 1. Open the file
// 2. Create a buffered reader (e.g., bufio.NewReader or bufio.NewScanner)
// 3. Iterate through each line
// 4. Process the line (e.g., print/log/store it)
// 5. Close the file
return nil
}

Requirements

  1. Open the File
    • Return an error if the file cannot be opened.
  2. Read Line by Line
    • Use a buffered reader (e.g., bufio.Scanner or bufio.Reader) to read each line.
    • Do not store the entire file’s contents in memory at once.
  3. Process Each Line
    • After reading a line, do some form of “processing” (printing, logging, or saving to a data structure).
    • The provided tests assume you can handle any lines. Real implementation details can vary.
  4. Close the File
    • Ensure the file is closed when you’re done.
  5. Error Handling
    • If reading fails, return the error.
    • If the file doesn’t exist, return an error (e.g., os.ErrNotExist).

Example Usage

package main


import "fmt"


func main() {
err := processFile("example.txt")
if err != nil {
fmt.Println("Failed to process file:", err)
return
}
fmt.Println("File processed successfully")
}

Hints

  • Use os.Open(filename) to get an os.File.
  • Pass that to a bufio.Scanner or bufio.NewReader.
  • Loop over each line with scanner.Scan() or reader.ReadString('\n').
  • Handle potential partial lines (EOF without a newline).
  • Be sure to defer file.Close() after successfully opening.
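
Possible Implementation

A minimal sketch using bufio.Scanner. "Processing" here is just printing the line; substitute whatever work your application needs. Note that a Scanner has a default maximum token size, so extremely long lines may require scanner.Buffer or a bufio.Reader instead.

package main

import (
    "bufio"
    "fmt"
    "os"
)

func processFile(filename string) error {
    f, err := os.Open(filename)
    if err != nil {
        return err
    }
    defer f.Close()

    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        line := scanner.Text()
        fmt.Println(line) // process the line
    }

    // scanner.Err reports any read error other than io.EOF.
    return scanner.Err()
}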
22.

Generic Slice Filtering

Answer

Task Description

Using Go generics, implement a function Filter that:

package main


func Filter[T any](slice []T, pred func(T) bool) []T {
// Your code here
return nil
}

Requirements

  1. Input
    • A slice of any type T.
    • A predicate function func(T) bool.
  2. Output
    • A new slice containing only the elements for which pred returns true.
    • Retain the order of elements that satisfy the predicate.
  3. Constraints & Behavior
    • Do not modify the original slice in place.
    • Return an empty slice if no elements match or if the original slice is empty.
    • Maintain type safety using Go generics.
  4. Edge Cases
    • Empty slice => returns empty slice.
    • No elements match => returns empty slice.
    • All elements match => returns a new slice identical to the original.

Example Usage

package main


import (
"fmt"
)


func main() {
nums := []int{1, 2, 3, 4, 5}
isOdd := func(x int) bool {
return x%2 != 0
}
oddNums := Filter(nums, isOdd)
fmt.Println(oddNums) // [1, 3, 5]
}

Hints

  • Use a simple loop to accumulate elements for which pred(element) is true.
  • Since T is a generic type, your function should work equally well for []int, []string, or other slice types.
  • Return a new slice; do not mutate or reorder the original slice.
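
Possible Implementation

A minimal generic sketch; the original slice is never modified.

package main

func Filter[T any](slice []T, pred func(T) bool) []T {
    result := make([]T, 0)
    for _, v := range slice {
        if pred(v) {
            result = append(result, v)
        }
    }
    return result
}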
23.

Cancel through Context

Answer

Task Description

You need to write a function:

package main


import "context"


func longOperation(ctx context.Context) {
// Your code here
}

Requirements

  1. Long Operation
    • The function simulates a time-consuming (long-running) task.
    • It should run asynchronously if needed, but at least it must respond to context cancellation.
  2. Use context.Context
    • Honor the ctx.Done() channel to detect cancellation or timeout.
    • Terminate the operation as soon as ctx.Done() is signaled, rather than continuing to completion.
  3. Behavior
    • If the context is not canceled, the function should simulate completing its work.
    • If the context is canceled or times out, the function should stop early and return promptly.
  4. Edge Cases
    • Immediate cancellation: If the context is canceled before starting, your function should return quickly.
    • Long running: The function might have loops or periodic checks for ctx.Err().

Example Usage

package main


import (
"context"
"fmt"
"time"
)


func main() {
// Example: Start a long operation with a timeout
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()


longOperation(ctx)
fmt.Println("Operation done (either completed or canceled)")
}

Hints

  • A typical pattern is something like:
for {
// do partial work
select {
case <-ctx.Done():
// context canceled or timed out
return
default:
// continue
}
}
  • If your operation does smaller sub-steps or sleeps, just ensure each iteration checks ctx.Done().
  • Use context.WithCancel, context.WithTimeout, or context.WithDeadline in tests to confirm cancellation/timeout.
  • Make sure to exit cleanly and not block if the context is canceled.
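
Possible Implementation

A minimal sketch that simulates work in small steps and checks for cancellation between them. The ten 200-millisecond steps are arbitrary assumptions standing in for real work.

package main

import (
    "context"
    "fmt"
    "time"
)

func longOperation(ctx context.Context) {
    for i := 0; i < 10; i++ {
        select {
        case <-ctx.Done():
            fmt.Println("longOperation stopped early:", ctx.Err())
            return
        case <-time.After(200 * time.Millisecond):
            // one unit of simulated work completed
        }
    }
    fmt.Println("longOperation completed")
}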
24.

0/1 Knapsack with Dynamic Programming

Answer

Task Description

Implement a 0/1 knapsack function:

package main


func knapsack(weights, values []int, capacity int) int {
// Your code here
return 0
}

Requirements

  1. Inputs
    • weights: Slice of item weights (each corresponds to an item).
    • values: Slice of item values, same length as weights.
    • capacity: Maximum weight capacity of the knapsack.
  2. Output
    • Integer representing the maximum total value that can be achieved without exceeding capacity.
  3. Constraints
    • This is the classic 0/1 knapsack problem, meaning each item may be taken or not taken (no partial item).
  4. Approach
    • Use dynamic programming to compute the best solution:
      • Typically a 2D table approach: dp[i][w] storing the best possible value using items up to i with capacity w.
      • Or a 1D rolling array optimization is also acceptable.
  5. Edge Cases
    • Empty weights/values: The result is 0.
    • Zero capacity: No items can be taken, so result is 0.
    • All items fit: Then result is the sum of all values.
    • Large capacity with big or small items.
    • Weights and values must have the same length (may assume the caller ensures this).

Example Usage

package main


import "fmt"


func main() {
weights := []int{1, 3, 4}
values := []int{15, 50, 60}
capacity := 4


maxVal := knapsack(weights, values, capacity)
fmt.Println("Max knapsack value:", maxVal) // Should print 65 for items (1 & 2)
}

Hints

  • A common DP solution is:
    1. Initialize a table dp[len(weights)+1][capacity+1] to 0.
    2. For each item i, for each possible weight w from 0..capacity:
      • Either you don’t take item i, so dp[i][w] = dp[i-1][w].
      • Or you take item i (if it fits), so dp[i][w] = dp[i-1][w - weights[i-1]] + values[i-1] (item i corresponds to slice index i-1, since Go slices are 0-indexed).
      • Then dp[i][w] = max(of those two).
    3. Return dp[len(weights)][capacity].
  • You can also optimize space usage to 1D, iterating weights from high to low to avoid overwriting needed subproblems.
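
Possible Implementation

A minimal sketch of the 1D optimization mentioned above: dp[w] holds the best value achievable with capacity w, and weights are iterated from high to low so each item is used at most once.

package main

func knapsack(weights, values []int, capacity int) int {
    dp := make([]int, capacity+1)
    for i := range weights {
        for w := capacity; w >= weights[i]; w-- {
            if candidate := dp[w-weights[i]] + values[i]; candidate > dp[w] {
                dp[w] = candidate
            }
        }
    }
    return dp[capacity]
}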
25.

Generating All Permutations

Answer

Task Description

Implement a function that generates all permutations of a given slice of integers:

package main
func permutations(nums []int) [][]int {
// Your code here
return nil
}

Requirements

  1. Input
    • A slice of integers (e.g., [1,2,3]).
  2. Output
    • All unique permutations of the input slice, each permutation represented as a slice of integers.
    • The function returns a slice of these permutations (a [][]int).
  3. Behavior
    • Permutations are the distinct arrangements of the given elements.
    • For n distinct elements, there are n! permutations.
    • If duplicates exist in nums, permutations containing the same elements in the same order should not be repeated multiple times.
  4. Edge Cases
    • Empty slice => The only “permutation” is the empty slice []. Return [[]].
    • Single element => Return [ [elem] ].
    • Duplicates => Must avoid generating the same permutation multiple times.
  5. Example
    • Input: [1, 2, 3]
      • Output: [ [1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1] ] (order can vary).

Example Usage

package main
import "fmt"
func main() {
data := []int{1, 2, 3}
perms := permutations(data)
fmt.Println("Permutations of", data, "are:", perms)
}

Hints

  • You can generate permutations by:
    1. Recursive approach:
      • Swap elements in the slice, recurse, swap back.
      • Keep track of used elements or do a backtracking approach.
    2. Use a helper that builds the result by picking each element in turn, recursing on the remainder.
    3. Handle duplicates by sorting the input and skipping repeated elements in the recursion or by using a map/set approach.
  • Make sure to avoid generating the same permutation multiple times if the input contains duplicates.
  • The test includes a case with duplicates ([1, 1, 2]). You must ensure your solution returns each distinct arrangement exactly once.
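
Possible Implementation

A minimal backtracking sketch. Duplicates are handled by sorting the input first and skipping a value whenever its identical predecessor has not been used at the current depth.

package main

import "sort"

func permutations(nums []int) [][]int {
    sorted := append([]int(nil), nums...)
    sort.Ints(sorted)

    var result [][]int
    used := make([]bool, len(sorted))
    current := make([]int, 0, len(sorted))

    var backtrack func()
    backtrack = func() {
        if len(current) == len(sorted) {
            result = append(result, append([]int(nil), current...))
            return
        }
        for i := 0; i < len(sorted); i++ {
            if used[i] {
                continue
            }
            // Skip duplicates: only the first unused copy of a value may start this branch.
            if i > 0 && sorted[i] == sorted[i-1] && !used[i-1] {
                continue
            }
            used[i] = true
            current = append(current, sorted[i])
            backtrack()
            current = current[:len(current)-1]
            used[i] = false
        }
    }
    backtrack()
    return result
}

For an empty input this returns a single empty permutation, matching the edge case above.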
26.

Concurrency-Safe FIFO Queue

Answer

Task Description

You need to implement a FIFO queue (SafeQueue) that supports concurrent access from multiple goroutines. The queue should offer two methods:

type SafeQueue struct {
// Your code here
}


func (q *SafeQueue) Enqueue(val interface{}) {
// Your code here
}


func (q *SafeQueue) Dequeue() (interface{}, bool) {
// Your code here
return nil, false
}

Requirements

  1. FIFO Behavior
    • First item Enqueued should be the first item Dequeued.
  2. Concurrency Safety
    • Multiple goroutines may call Enqueue or Dequeue at the same time.
    • Must avoid race conditions. Usually, this is done with a sync.Mutex, or a channel-based approach, or sync.Cond.
    • Dequeue must not panic if the queue is empty; it should return (nil, false) or an equivalent indicator.
  3. Return Values
    • Enqueue(val interface{}): no return, just add val to the end.
    • Dequeue() (interface{}, bool): returns (theItem, true) if successful, or (nil, false) if the queue is empty.
  4. Edge Cases
    • Empty queue on dequeue => return (nil, false).
    • Concurrency with large number of Enqueue and Dequeue calls.
    • Ensuring no data is lost or duplicated.

Example Usage

package main


func main() {
var q SafeQueue
q.Enqueue("hello")
q.Enqueue("world")


val, ok := q.Dequeue()
// val => "hello", ok => true
val, ok = q.Dequeue()
// val => "world", ok => true
val, ok = q.Dequeue()
// val => nil, ok => false (empty queue)
}

Hints

  • A simple approach is to store items in a slice:
    • Use a sync.Mutex to guard access to this slice.
    • Enqueue appends to the slice.
    • Dequeue removes from the front if not empty.
  • Watch out for slice re-slicing or shifting overhead if you remove from the front. An alternative is to keep front/back indexes.
  • Ensure Dequeue returns immediately when empty rather than blocking. (Unless a blocking approach is desired, but typically the specification just returns (nil, false)).
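
Possible Implementation

A minimal mutex-guarded sketch backed by a slice, as described in the hints. Re-slicing on Dequeue is simple but keeps the backing array alive; front/back indexes or a linked list avoid that if throughput matters.

package main

import "sync"

type SafeQueue struct {
    mu    sync.Mutex
    items []interface{}
}

func (q *SafeQueue) Enqueue(val interface{}) {
    q.mu.Lock()
    defer q.mu.Unlock()
    q.items = append(q.items, val)
}

func (q *SafeQueue) Dequeue() (interface{}, bool) {
    q.mu.Lock()
    defer q.mu.Unlock()
    if len(q.items) == 0 {
        return nil, false
    }
    val := q.items[0]
    q.items = q.items[1:]
    return val, true
}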
27.

Trie Implementation

Answer

Task Description

You need to implement a Trie data structure with the following methods:

package main


type Trie struct {
// Your code here
}


func (t *Trie) Insert(word string) {
// Your code here
}


func (t *Trie) Search(word string) bool {
// Your code here
return false
}

Requirements

  1. Insert(word string)
    • Inserts the string word into the trie.
  2. Search(word string) bool
    • Returns true if word exists in the trie, false otherwise.
  3. Data Structure
    • The trie typically has a root node that may contain up to 26 children (for a basic a-z scenario) or a map of runes if handling Unicode or arbitrary characters.
    • Each child node may represent the next character in the path.
    • One common approach is to store a boolean endOfWord (or isTerminal) flag in nodes to mark complete words.
  4. Edge Cases
    • Empty String: Decide if you allow an empty string to be inserted and searched. If so, handle carefully in your data structure.
    • Duplicate Inserts: Inserting the same word multiple times shouldn’t break anything. Searching should still return true.
    • Partial Matches: Searching for a prefix (e.g., “app” when only “apple” is inserted) should return false unless you consider prefixes as valid words in your design.
  5. Behavior
    • The tests assume standard trie behavior: partial prefixes do not count as found words unless explicitly inserted.
    • Search must traverse the trie nodes based on each character; if any link is missing, or if the final node isn’t marked as end of a word, return false.

Example Usage

package main


import "fmt"


func main() {
trie := &Trie{}
trie.Insert("hello")
fmt.Println(trie.Search("hello")) // true
fmt.Println(trie.Search("hel"))   // false
}

Hints

  • Node Structure:
    • A node might have a map or fixed array of children.
    • A boolean endOfWord to mark if a node completes a word.
  • Insert:
    • Traverse from the root node, create child nodes if missing, then mark endOfWord on the final node.
  • Search:
    • Traverse from the root node, following each character’s child link.
    • If at any point the link doesn’t exist, return false.
    • At the end, check if the final node’s endOfWord is true.
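
Possible Implementation

A minimal sketch using a map of runes per node, so it is not limited to a-z. The root is created lazily, so a zero-value Trie works out of the box.

package main

type trieNode struct {
    children  map[rune]*trieNode
    endOfWord bool
}

type Trie struct {
    root *trieNode
}

func (t *Trie) Insert(word string) {
    if t.root == nil {
        t.root = &trieNode{children: map[rune]*trieNode{}}
    }
    node := t.root
    for _, r := range word {
        child, ok := node.children[r]
        if !ok {
            child = &trieNode{children: map[rune]*trieNode{}}
            node.children[r] = child
        }
        node = child
    }
    node.endOfWord = true
}

func (t *Trie) Search(word string) bool {
    if t.root == nil {
        return false
    }
    node := t.root
    for _, r := range word {
        child, ok := node.children[r]
        if !ok {
            return false
        }
        node = child
    }
    return node.endOfWord
}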
28.

Fenwick (Binary Indexed) Tree

Answer

Task Description

You need to implement a Fenwick Tree (also called a Binary Indexed Tree, or BIT) for prefix sums:

package main


type Fenwick struct {
// Your code here
}


func NewFenwick(size int) *Fenwick {
// Your code here
return nil
}


func (f *Fenwick) Update(i, delta int) {
// Your code here
}


func (f *Fenwick) PrefixSum(i int) int {
// Your code here
return 0
}

Requirements

  1. Fenwick Tree Structure
    • Backed by an array (often 1-indexed) where each element covers a range of indices determined by its last set bit.
    • Typically tree[i] stores the partial sum of elements from i - (i & -i) + 1 through i inclusive.
  2. Update(i, delta int)
    • Increases the value at index i by delta.
    • Then propagate that update to all relevant Fenwick tree nodes.
  3. PrefixSum(i int) int
    • Returns the sum of elements from index 1 up to index i (inclusive).
    • Traverses the Fenwick tree array in a way that accumulates partial sums.
  4. 1-Based Indexing
    • Often Fenwick trees are implemented using 1-based indexing. For a tree of size n, valid indices are 1..n.
    • If your input is 0-based, you may need to adjust by +1.
  5. Complexities
    • Both Update and PrefixSum should work in O(log n) time.
  6. Edge Cases
    • Size=1: minimal tree.
    • Index out of bounds: Usually the user of the tree is expected to remain in [1..size].
    • Negative or zero updates are allowed and should function normally.

Example Usage

package main


import "fmt"


func main() {
fenw := NewFenwick(5)
fenw.Update(1, 2)
fenw.Update(3, 5)
// PrefixSum(3) => 2 (index 1) + 5 (index 3) = 7
fmt.Println("PrefixSum(3) =", fenw.PrefixSum(3))
}

Hints

  • For a Fenwick tree, the typical code patterns are:
    • Update(i, delta):
for i <= size {
    tree[i] += delta
    i += i & (-i)
}
    • PrefixSum(i):
result := 0
for i > 0 {
    result += tree[i]
    i -= i & (-i)
}
return result
  • Make sure you handle 1-based indexing consistently.
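
Possible Implementation

A minimal 1-indexed sketch that follows the patterns above; valid indices are 1..size.

package main

type Fenwick struct {
    tree []int
}

func NewFenwick(size int) *Fenwick {
    return &Fenwick{tree: make([]int, size+1)}
}

func (f *Fenwick) Update(i, delta int) {
    for ; i < len(f.tree); i += i & (-i) {
        f.tree[i] += delta
    }
}

func (f *Fenwick) PrefixSum(i int) int {
    sum := 0
    for ; i > 0; i -= i & (-i) {
        sum += f.tree[i]
    }
    return sum
}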

29.

Graceful Shutdown Server

Answer

Task Description

You must implement a server that:

  1. Listens for incoming connections or requests (e.g., HTTP server).
  2. Gracefully shuts down upon receiving SIGINT or SIGTERM signals.
  3. Prints a message (e.g., “Shutting down gracefully…”) before exiting.

package main


func runServer() {
// Your code here
// 1. Start a server (e.g., an http.Server on some port).
// 2. Listen for OS signals SIGINT/SIGTERM in a separate goroutine or use signal.Notify.
// 3. When signal is received, shut down the server gracefully (e.g. server.Shutdown(ctx)).
// 4. Print "Shutting down gracefully" or any final message.
}
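
Possible Implementation

A minimal sketch using net/http and signal.Notify. Port 8080 and the 5-second shutdown timeout are arbitrary assumptions.

package main

import (
    "context"
    "fmt"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func runServer() {
    srv := &http.Server{Addr: ":8080"}

    // Serve in the background; ErrServerClosed is expected after Shutdown.
    go func() {
        if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            fmt.Println("server error:", err)
        }
    }()

    // Block until SIGINT or SIGTERM is received.
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
    <-stop

    fmt.Println("Shutting down gracefully...")
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    if err := srv.Shutdown(ctx); err != nil {
        fmt.Println("shutdown error:", err)
    }
}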
30.

Round-Robin Load Balancer

Answer

Task Description

Implement a round-robin load balancer with a list of server endpoints. Each time Next() is called, return the next endpoint in the list, cycling back to the first endpoint after reaching the end.

package main


type RoundRobin struct {
// Your code here (e.g. slice of endpoints, current index, etc.)
}


func NewRoundRobin(endpoints []string) *RoundRobin {
// Your code here
return nil
}


func (r *RoundRobin) Next() string {
// Your code here
return ""
}

Requirements

  1. Endpoints
    • A slice of server endpoints, e.g. []string{"server1", "server2", "server3"}.
    • If the slice is empty, decide how to handle Next() (return an empty string, or handle differently).
  2. Round-Robin Behavior
    • Each call to Next() should return the next endpoint in a cycle.
    • After the last endpoint, wrap around to the first.
  3. Thread Safety
    • (Optional) If concurrency is needed, you might protect the current index with a mutex.
    • The tests here do not specifically test concurrency, so a basic approach is fine.
  4. Edge Cases
    • Single Endpoint: Repeatedly return the same endpoint.
    • Empty EndpointsNext() should return "".

Example Usage

package main


import "fmt"


func main() {
lb := NewRoundRobin([]string{"s1", "s2", "s3"})
for i := 0; i < 5; i++ {
fmt.Println(lb.Next())
// s1, s2, s3, s1, s2 ...
}
}

Hints

  • Keep a current index. Each call to Next() returns the endpoint at current index, then increments the index modulo the length of endpoints.
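
Possible Implementation

A minimal sketch. The mutex is not strictly required by the task but makes Next safe to call from multiple goroutines.

package main

import "sync"

type RoundRobin struct {
    mu        sync.Mutex
    endpoints []string
    current   int
}

func NewRoundRobin(endpoints []string) *RoundRobin {
    return &RoundRobin{endpoints: endpoints}
}

func (r *RoundRobin) Next() string {
    r.mu.Lock()
    defer r.mu.Unlock()
    if len(r.endpoints) == 0 {
        return ""
    }
    ep := r.endpoints[r.current]
    r.current = (r.current + 1) % len(r.endpoints)
    return ep
}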

Popular Golang Development questions

Will AI replace Golang developers?

AI will not replace Golang developers. Developers are essential for creating complex architectures, optimizing code, and understanding system requirements. AI may assist in coding tasks, but human expertise ensures application reliability and scalability.

Is Golang front-end or back-end?

Golang is primarily a back-end language. It is widely used for building APIs, server-side logic, and distributed systems. While it can handle some front-end tasks through WebAssembly, its main focus is on back-end development.

What is Golang used for?

Golang, or Go, is used for building scalable and efficient applications, including web servers, microservices, network tools, and cloud-native applications. Its performance, simplicity, and concurrency model make it ideal for high-performance systems and modern distributed architectures.

What is the difference between Go and C++?

Go is simpler and easier to learn than C++, with a focus on readability and concurrency. C++ offers more control over system resources and is better for low-level programming, while Go excels in modern web applications, microservices, and cloud-native environments due to its built-in concurrency and garbage collection.

What are the advantages of using Golang for web development?

Golang’s advantages for web development include its high performance, simplicity, and scalability. Its built-in HTTP library makes it easy to create web servers, and its concurrency model ensures efficient handling of multiple requests. Go’s compiled nature results in fast execution and deployment, making it ideal for modern web applications.
