Top JavaScript Backend Interview Questions for MNCs (2025-2026 Edition)
Multinational companies (MNCs) like Google, Amazon, Netflix, and Microsoft focus less on syntax trivia and more on scalability, system design, and deep internal knowledge of the runtime. This guide covers the most high-impact topics.

Part 1: Core JavaScript Deep Dives
Focus: How the language actually works under the hood.
1. Explain the Event Loop in detail. How do microtasks differ from macrotasks?
Answer:
The Event Loop is the mechanism that allows Node.js to perform non-blocking I/O operations despite being single-threaded. It offloads operations to the system kernel whenever possible.
Call Stack: Executes synchronous code.
Macrotasks (Task Queue): setTimeout, setInterval, setImmediate, I/O callbacks.
Microtasks (Microtask Queue): Promise callbacks (.then/.catch), process.nextTick.
Critical Distinction: The Event Loop drains the entire Microtask Queue whenever the Call Stack empties: after every completed macrotask and before moving on to the next one. In Node.js, process.nextTick callbacks run before Promise callbacks, and both run before any macrotask, so code that recursively schedules them can starve the Event Loop and block I/O.
2. How does the this keyword work? (Arrow functions vs. Regular functions)
Answer:
Regular Functions: this is dynamic. It depends on how the function is called (e.g., global scope, object method, or bound explicitly via call/apply).
Arrow Functions: this is lexically scoped. It inherits this from the parent scope at the time of definition and cannot be changed with bind, call, or apply.
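A minimal sketch of the difference (the `logger` object is illustrative; run with Node.js CommonJS, where module-level `this` is not the object):

```javascript
const logger = {
  prefix: '[api]',
  regular() {
    // `this` is whatever is left of the dot at call time
    return `${this.prefix} regular`;
  },
  arrow: () => {
    // `this` was captured from the module scope at definition time,
    // so it is NOT `logger` here
    return `${this?.prefix} arrow`;
  }
};

console.log(logger.regular()); // "[api] regular"
console.log(logger.arrow());   // "undefined arrow"

const detached = logger.regular;
const rebound = detached.bind(logger); // explicit binding works on regular functions
console.log(rebound());        // "[api] regular"
```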
3. What is a Closure? Give a practical backend use case.
Answer:
A closure is a function that remembers its outer variables and can access them even when the outer function has finished executing.
Backend Use Case: creating "private" variables or caching/memoization.
function createDatabaseConnection() {
  const connectionString = "postgres://user:pass@localhost:5432"; // Private variable
  return {
    connect: () => console.log(`Connecting to ${connectionString}`)
  };
}
Part 2: Node.js Architecture & Performance
Focus: Runtime specifics and handling high traffic.
4. Node.js is single-threaded. How does it handle concurrency?
Answer:
Node.js uses the Reactor Pattern. The main thread runs the Event Loop, while I/O is delegated to libuv (a C library). Network sockets are handled by the kernel's own async mechanisms (epoll, kqueue, IOCP); file system operations, DNS lookups, and some crypto functions go to libuv's thread pool (default 4 threads, configurable via UV_THREADPOOL_SIZE). When a background operation finishes, its callback is pushed to the Event Queue for the main thread to execute.
5. Streams vs. Buffers: When to use which?
Answer:
Buffer: A temporary memory space for raw binary data. Good for small data packets.
Stream: A sequence of data elements made available over time. Why it matters: If you load a 1GB file using a Buffer, you need 1GB of RAM. If you use a Stream, you process it in small chunks (e.g., 64KB), keeping memory usage low. This is essential for building scalable video streaming services or file processing pipelines.
6. Explain process.nextTick() vs setImmediate().
Answer:
process.nextTick() fires immediately after the current operation completes, before the Event Loop continues to the next phase. It has the highest priority and can block I/O if abused.
setImmediate() fires in the Check phase of the Event Loop, after the I/O callbacks of the current iteration.
Part 3: Backend System Design
Focus: Architecture, Scalability, and Trade-offs.
7. Design a Rate Limiter (e.g., for an API).
Core Concept: You need to limit the number of requests a user can make in a given timeframe (e.g., 100 requests/minute).
Algorithms:
Token Bucket: Tokens are added at a constant rate. Requests consume tokens. Good for allowing "bursts" of traffic.
Leaky Bucket: Requests enter a queue and are processed at a constant rate. Good for smoothing out traffic.
Fixed Window Counter: Simple but has an edge case where 2x traffic can occur at the window boundary.
Sliding Window Log: Most accurate but memory expensive.
Storage: Redis (using INCR and EXPIRE commands) is the standard choice because it's fast and supports atomic operations.
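A minimal in-memory Token Bucket sketch (class and parameter names are illustrative; in production this state would live in Redis so every server shares it):

```javascript
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;          // start full: allows an initial burst
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  allow() {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Refill at a constant rate, capped at capacity
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;   // request accepted
    }
    return false;    // rate limited
  }
}

const limiter = new TokenBucket(3, 1); // burst of 3, refills 1 token/sec
console.log([1, 2, 3, 4].map(() => limiter.allow()));
// [true, true, true, false]
```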
8. How would you design a Distributed Counter?
Scenario: You need to count "views" on a YouTube video served by 50 different servers.
Challenge: Writing to a single database row 50,000 times/second creates row-lock contention and can bring the database down.
Solution:
Sharding: Split the counter into N sub-counters (e.g., 50 rows in the DB).
Random Write: Each server picks a random row (1-50) to increment.
Aggregation: When reading the total views, query SELECT SUM(count) FROM video_counters WHERE video_id = X.
Redis: Alternatively, use Redis INCR, which is atomic and extremely fast, then periodically flush to a persistent DB (Write-Behind strategy).
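The sharding idea can be sketched with a plain array standing in for the N database rows (the array and shard count are illustrative; real sub-counters would be rows or Redis keys):

```javascript
const SHARDS = 50;
const counters = new Array(SHARDS).fill(0);

function incrementView() {
  // Random write: each increment lands on a random shard,
  // so no single row becomes a lock hotspot
  const shard = Math.floor(Math.random() * SHARDS);
  counters[shard]++;
}

function totalViews() {
  // Aggregation: the equivalent of SELECT SUM(count)
  return counters.reduce((sum, c) => sum + c, 0);
}

for (let i = 0; i < 1000; i++) incrementView();
console.log(totalViews()); // 1000
```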
9. How do you handle "Hot Keys" in a Caching System (e.g., Redis)?
Problem: If a celebrity tweets, millions of people request the same profile simultaneously. All requests hit the same Redis shard, overloading it.
Solution:
Local Caching: Store the hot data in the application server's memory (RAM) for a few seconds.
Key Replication: Create key_1, key_2, key_3 with copies of the data. Randomly route users to different copies to spread the load.
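A minimal sketch of the local-caching idea: a short-TTL in-process Map in front of Redis (names are illustrative, and `fakeRedis` stands in for a real Redis client):

```javascript
const localCache = new Map();
const TTL_MS = 2000; // hot data lives in app-server RAM for a few seconds

function getProfile(key, fetchFromRedis) {
  const hit = localCache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return hit.value;               // hot key served from local RAM
  }
  const value = fetchFromRedis(key); // only cache misses reach Redis
  localCache.set(key, { value, at: Date.now() });
  return value;
}

let redisCalls = 0;
const fakeRedis = (key) => { redisCalls++; return `profile:${key}`; };

getProfile('celebrity', fakeRedis);
getProfile('celebrity', fakeRedis);
console.log(redisCalls); // 1 — the second read never left the process
```

The trade-off is a few seconds of staleness per app server in exchange for removing the thundering herd from the hot shard.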
Part 4: Coding Challenges (Live Coding)
MNCs often ask you to implement standard library functions or data structures.
10. Implement Promise.all Polyfill
Question: Write a function myPromiseAll that behaves like Promise.all.
function myPromiseAll(promises) {
  return new Promise((resolve, reject) => {
    const results = [];
    let completed = 0;
    if (promises.length === 0) {
      resolve([]);
      return;
    }
    promises.forEach((promise, index) => {
      // Use Promise.resolve to handle non-promise values
      Promise.resolve(promise)
        .then((value) => {
          results[index] = value;
          completed++;
          if (completed === promises.length) {
            resolve(results);
          }
        })
        .catch((err) => {
          reject(err); // Fail-fast behavior
        });
    });
  });
}

// Test
const p1 = Promise.resolve(3);
const p2 = 42;
const p3 = new Promise((resolve, reject) => {
  setTimeout(resolve, 100, 'foo');
});
myPromiseAll([p1, p2, p3]).then(console.log); // [3, 42, "foo"]
11. Implement an LRU (Least Recently Used) Cache
Question: Design a cache that evicts the least recently used item when capacity is reached. Operations get and put must be O(1).
Hint: Use a Map (which preserves insertion order in JS) or a combination of a Doubly Linked List and a Hash Map.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.cache = new Map();
  }

  get(key) {
    if (!this.cache.has(key)) return -1;
    // "Refresh" the item by deleting and re-inserting
    const value = this.cache.get(key);
    this.cache.delete(key);
    this.cache.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.cache.has(key)) {
      this.cache.delete(key); // Remove old version
    } else if (this.cache.size >= this.capacity) {
      // Evict the least recently used (first item in Map)
      // keys().next().value gets the first key
      this.cache.delete(this.cache.keys().next().value);
    }
    this.cache.set(key, value); // Insert at the end (most recent)
  }
}
Part 5: Behavioral & Soft Skills
"Tell me about a time you debugged a difficult production issue."
Tip: Use the STAR method (Situation, Task, Action, Result). Mention tools like Profilers, Heap Snapshots, or Distributed Tracing (Jaeger/Datadog).
"How do you handle disagreement with a Product Manager on a feature?"
Tip: Focus on data-driven decisions and trade-offs (e.g., "I explained that feature X would increase latency by 200ms, which might drop conversion...").
"Describe a time you had to compromise on code quality to meet a deadline."
Tip: Frame this as "Technical Debt" management. Explain that you made a calculated decision to ship fast but created a ticket/plan to refactor it later. It shows you understand business priorities vs. engineering perfection.
"How do you approach reviewing a junior developer's code?"
Tip: Emphasize empathy and constructive feedback. Mention balancing "nitpicks" (style) vs. architectural issues. "I look for logic errors first, then readability, and I always explain why I'm suggesting a change."
"Tell me about a major mistake you made that took down production."
Tip: Do not hide it. Own the mistake immediately. Focus heavily on the post-mortem: how you fixed it, and more importantly, the automated safeguards (tests, canary deployments) you built to prevent it from ever happening again.