Interview Questions
1. How do you find memory leaks in an application and prevent them? Which tools show memory leaks?
--> JProfiler - provides interfaces for viewing system performance, memory usage, potential memory leaks, and thread profiling.
--> IntelliJ Profiler
--> Java VisualVM
--> -verbose:gc - enabling verbose garbage collection prints a detailed trace of each GC cycle; enable it by adding the -verbose:gc option to the JVM command line.
--> Eclipse memory leak warnings
--> Code reviews
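A classic leak pattern worth knowing for this question: objects held in a static collection are never garbage collected. A minimal JDK-only sketch (the class and names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Static collections live as long as the class loader,
    // so entries added here are never eligible for GC.
    static final List<byte[]> CACHE = new ArrayList<>();

    static int handleRequest(int payloadSize) {
        CACHE.add(new byte[payloadSize]); // leak: nothing ever removes entries
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            handleRequest(1024);
        }
        // The "cache" only grows; a profiler heap dump would show
        // LeakExample.CACHE retaining more memory after every request.
        System.out.println(CACHE.size());
    }
}
```

A heap profiler such as JProfiler or VisualVM surfaces this as an ever-growing retained set rooted at the static field.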
2. Where do we define code coverage checks, and what are the main checklist items?
Doing it with IntelliJ: https://www.jetbrains.com/help/idea/creating-custom-
inspections.html
Doing in Sonar Qube:
.Create a SonarQube plugin.
.Put a dependency on the API of the language plugin for which you are writing
coding rules.
.Create as many custom rules as required.
.Generate the SonarQube plugin (jar file).
.Place this jar file in the SONARQUBE_HOME/extensions/plugins directory.
.Restart SonarQube server.
Main Checklist:
.Functional requirements: does the code meet all the requirements it should meet?
.Side effects on existing code: sometimes one change in your system may cause a bug in other upstream and downstream systems.
.Concurrency: is the code thread-safe? Is access to shared resources properly synchronized?
.Readability and maintenance: is the code readable, or is it too complicated?
.Consistency: the code should follow the same conventions and patterns as the rest of the codebase.
.Performance: optimize code that will execute often, for example inside loops or on hot paths.
.Error and exception handling
.Simplicity
.Reuse of existing code
.Unit tests covering all functions
3. How do you handle load balancing, and where do you configure it? Azure cloud / AWS cloud / Spring Boot?
11. AWS Lambda?
AWS Lambda is an event-driven, serverless computing service that lets you run
code without provisioning or managing servers. With Lambda, you can upload your
code as a ZIP file or container image, and Lambda automatically and precisely
allocates compute execution power and runs your code based on the incoming request
or event. You can write Lambda functions in your favorite language (Node.js,
Python, Go, Java, and more) and use both serverless and container tools, such as
AWS SAM or Docker CLI, to build, test, and deploy your functions.
12. Which AWS services are used?
https://www.eginnovations.com/blog/top-10-aws-services-explained-with-use-cases/
AWS Lambda
AWS DynamoDB – NoSQL database service
AWS EKS – Elastic Kubernetes Service
NoClassDefFoundError occurs when a class was present at compile time and the
program compiled and linked successfully, but the class is not present at
runtime. It is an error derived from LinkageError.
Bootstrap Class Loader – It loads JDK internal classes. It loads rt.jar and
other core classes for example java.lang.* package classes.
Extensions Class Loader – It loads classes from the JDK extensions directory,
usually $JAVA_HOME/lib/ext directory.
System Class Loader – This classloader loads classes from the current
classpath. We can set classpath while invoking a program using -cp or -classpath
command line option.
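The loaders above can be inspected directly; note that core classes report null because the bootstrap loader is implemented natively. A small sketch:

```java
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Core classes such as java.lang.String are loaded by the
        // bootstrap class loader, which is represented as null.
        System.out.println(String.class.getClassLoader());

        // Application classes on the classpath are loaded by the
        // system (application) class loader.
        System.out.println(ClassLoaderDemo.class.getClassLoader());
    }
}
```

(On Java 9+, the extensions loader was replaced by the platform class loader and rt.jar no longer exists, so the listing above describes the pre-Java-9 layout.)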
A stream pipeline is built internally as a linked chain of stages, and each
stage gets assigned a bitmap of characteristic flags that follows this structure:
SIZED - the size of the stream is known.
DISTINCT - the elements in the stream are unique, no duplicates.
SORTED - the elements are sorted in a natural order.
ORDERED - whether the stream has a meaningful encounter order; this means that the
order in which the elements are streamed should be preserved on collection.
The bitmap is read as flags, e.g. 1010.
flatMap() can be used where we have to flatten a nested structure, which map()
cannot do, because map() would produce a stream of streams.
List of lists - [[1, 2], [3, 4], [5, 6], [7, 8]]
List generated by flatMap - [1, 2, 3, 4, 5, 6, 7, 8]
EXP (number is a List<List<Integer>>):
List<Integer> flatList
    = number.stream()
          .flatMap(list -> list.stream())
          .collect(Collectors.toList());
20. What is an API gateway? How many API gateways are available?
An API gateway acts as a mediator between client applications and backend
services in a microservices architecture.
Examples: Microsoft Azure API Management, Kong Gateway
https://docs.spring.io/spring-framework/reference/integration/scheduling.html#
@Configuration
@EnableAsync
@EnableScheduling
public class AppConfig {
}
@Scheduled(cron = "*/5 * * * * MON-FRI")
public void doSomething() {
    // something that should run on weekdays only
}

@Async
void doSomething() {
    // this will be run asynchronously
}
@Component
public class Consumer {
@KafkaListener(topics = "test")
public void processMessage(String content){
System.out.println("Message received: " + content);
}
}
application.properties
----------------------
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=myGroup
spring.kafka.producer.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
---------------------------
Multiple consumers in the same consumer group
@Component
public class Consumer {
@KafkaListener(topics = "test", concurrency = "2", groupId = "myGroup")
public void processMessage(String content) {
System.out.println("Message received: " + content);
}
}
----------------------------------------------
@Service
public class Producer {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage(String message) {
        kafkaTemplate.send("test", message);
    }
}
jwtTokenUtil.validateToken(jwtToken, userDetails)
// validate token
public Boolean validateToken(String token, UserDetails userDetails) {
    final String username = getUsernameFromToken(token);
    return (username.equals(userDetails.getUsername()) && !isTokenExpired(token));
}
There are two types of saga implementations: "choreography" and "orchestration".
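The orchestration variant can be sketched in plain Java: a central orchestrator runs each local transaction in order and, when one fails, runs the compensations of the already-completed steps in reverse (the step/record names and in-memory setup here are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class SagaOrchestrator {
    // A step pairs a local transaction with its compensating action.
    record Step(String name, Runnable action, Runnable compensation) {}

    /** Runs steps in order; on failure, compensates completed steps in reverse. */
    static boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.action().run();
                completed.push(step); // remember for possible rollback
            } catch (RuntimeException e) {
                // undo everything done so far, most recent first
                while (!completed.isEmpty()) {
                    completed.pop().compensation().run();
                }
                return false;
            }
        }
        return true;
    }
}
```

For example, if "reserve credit" throws after "create order" succeeded, the orchestrator invokes the cancel-order compensation, restoring consistency without a distributed transaction.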
Producer: A producer is a client that sends messages to the Kafka server to the
specified topic.
Consumer: Consumers are the recipients who receive messages from the Kafka
server.
Broker: Brokers can create a Kafka cluster by sharing information using
Zookeeper. A broker receives messages from producers and consumers fetch messages
from the broker by topic, partition, and offset.
Cluster: Kafka is a distributed system. A Kafka cluster contains multiple
brokers sharing the workload.
Topic: A topic is a category name to which messages are published and from
which consumers can receive messages.
Partition: Messages published to a topic are spread across a Kafka cluster
into several partitions. Each partition can be associated with a broker to allow
consumers to read from a topic in parallel.
Offset: An offset is the position of a message within a partition; it acts as
a pointer to the last message a consumer has already read.
37. How will you find your exact request in a distributed environment?
Implement microservices logging.
Complement logging with crash reporting.
Generate a unique ID for each request to trace it across microservices.
Prepare each microservice to accept and store request IDs.
Create and implement your own logging patterns.
Use a logging framework.
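The "unique ID per request" step can be sketched with a ThreadLocal correlation ID that every log line includes (in a real service this would live in a servlet filter and an MDC; the class, header name, and method names here are illustrative):

```java
import java.util.UUID;

public class CorrelationId {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    /** Called at the edge (e.g. a filter): reuse the incoming
     *  X-Correlation-Id header if present, otherwise mint a new one. */
    static String start(String incomingHeader) {
        String id = (incomingHeader != null) ? incomingHeader : UUID.randomUUID().toString();
        CURRENT.set(id);
        return id;
    }

    static String current() {
        return CURRENT.get();
    }

    /** Every log line carries the ID, so one request can be grepped
     *  across all services that propagated the header. */
    static String log(String message) {
        return "[" + current() + "] " + message;
    }
}
```

Downstream calls forward the same ID in an outgoing header, so the centralized log store can reassemble the whole request path.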
1. Decomposition Patterns:
Decompose by Business Capability - split the services by business capability, e.g. sales, marketing, claims, billing.
Decompose by Subdomain - each subdomain has its own model, and the scope of that model is called the bounded context.
Strangler Pattern - comes to the rescue when migrating a legacy system incrementally; it is based on an analogy to a vine that strangles a tree.
2. Integration Patterns:
API Gateway Pattern - a single point of contact through which clients reach multiple services.
Aggregator Pattern - responses from multiple services are aggregated before being sent to the client.
Client-Side UI Composition Pattern - the client UI composes separate fragments (e.g. grids/widgets) from the responses of multiple services.
3. Database Patterns
Database per Service - each service must have its own database.
Shared Database per Service - sometimes a database is shared by more than one service; an exceptional case.
Command Query Responsibility Segregation (CQRS) - fetching data spread across services is difficult in one query, so the model is split into commands (writes) and queries (reads).
Saga Pattern - maintains data consistency across services when a step fails, via compensating transactions.
4. Observability Patterns
Log Aggregation - we need a centralized logging service that aggregates logs from each service instance; users can search and analyze the logs.
Performance Metrics - a metrics service is required to gather statistics about individual operations.
Distributed Tracing - assigns each external request a unique external request ID and passes it to all services.
Health Check - the /actuator/health endpoint in Spring Boot.
5. Cross-Cutting Concern Patterns
External Configuration - externalize all the configuration, including endpoint URLs and credentials.
Service Discovery Pattern - a service registry should be created so that services with dynamic IPs can be discovered by consumers.
Circuit Breaker Pattern - when calling an external service that may be down, implement the circuit breaker pattern.
Blue-Green Deployment Pattern - run two identical environments, e.g. the existing version on green and the latest version on blue, and switch traffic between them.
@Bean
@Profile("!mock")
OutputPort realOutputPort() {
    return new RealOutputPort();
}
51. Stream collect(): what will happen if you do not call collect(), max(), or
another terminal operation after filter()?
Nothing happens. filter() is an intermediate operation: it just creates another
stream, and because streams are lazy, no elements are processed until a terminal
operation is invoked.
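Stream laziness can be demonstrated with a counter (a small JDK-only sketch):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LazyStreams {
    public static void main(String[] args) {
        AtomicInteger filterCalls = new AtomicInteger();

        Stream<Integer> evens = List.of(1, 2, 3, 4).stream()
                .filter(n -> {
                    filterCalls.incrementAndGet();
                    return n % 2 == 0;
                });
        // No terminal operation yet: the predicate has never run.
        System.out.println(filterCalls.get()); // 0

        List<Integer> result = evens.collect(Collectors.toList());
        // The terminal collect() pulled all 4 elements through the filter.
        System.out.println(filterCalls.get()); // 4
        System.out.println(result);            // [2, 4]
    }
}
```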
An example (Spring MVC HandlerInterceptor callbacks):
@Override preHandle();
@Override postHandle();
@Override afterCompletion();
Cached Thread Pool: a thread pool that creates as many threads as it needs to
execute tasks in parallel. Old idle threads are reused for new tasks; if a
thread is unused for 60 seconds, it is terminated and removed from the pool.
Method: Executors.newCachedThreadPool()
Fixed Thread Pool: a thread pool with a fixed number of threads. If no thread
is available for a task, the task waits in a queue until another task ends.
Method: Executors.newFixedThreadPool()
Scheduled Thread Pool: a thread pool made to schedule future tasks. Method:
Executors.newScheduledThreadPool()
Single Thread Scheduled Pool: a thread pool with only one thread to schedule
future tasks. Method: Executors.newSingleThreadScheduledExecutor()
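The fixed-pool variant in action (a minimal JDK-only sketch; the method and class names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    /** Squares each input on a fixed pool of 2 threads and sums the results. */
    static int sumOfSquares(List<Integer> inputs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int n : inputs) {
                futures.add(pool.submit(() -> n * n)); // queued if both threads are busy
            }
            int sum = 0;
            for (Future<Integer> f : futures) {
                sum += f.get(); // blocks until that task finishes
            }
            return sum;
        } finally {
            pool.shutdown(); // always release the pool's threads
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(List.of(1, 2, 3, 4))); // 30
    }
}
```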
59. How do you ignore some endpoints, e.g. GET methods, for JWT checks in microservices?
@Configuration
@Profile("test")
public class ApplicationNoSecurity {
    @Bean
    public WebSecurityCustomizer webSecurityCustomizer() {
        // skip the security filter chain for all GET endpoints
        return (web) -> web.ignoring()
                .antMatchers(HttpMethod.GET, "/**");
    }
}
63. What is a Kafka partition, and how many consumers can consume one partition?
Partitions enable messages to be split in parallel across several brokers in
the cluster.
Kafka assigns the partitions of a topic to the consumers in a group so that
each partition is consumed by exactly one consumer in the group. This ensures
that records are processed in parallel and no consumer steps on another
consumer's toes.
67. What is the difference between Query, NativeQuery, NamedQuery and TypedQuery?
Query:
------
Query refers to a JPQL/HQL query with syntax similar to SQL, generally used to
execute DML statements (CRUD operations). In JPA, you can create a query using
entityManager.createQuery(). You can look into the API for more detail.
NativeQuery:
-----------
Native query refers to actual sql queries (referring to actual database
objects). These queries are the sql statements which can be directly executed in
database using a database client.
NamedQuery:
-----------
Similar to how a constant is defined, a NamedQuery is a way to define your
query by giving it a name. You can define it in the Hibernate mapping file or
with annotations at the entity level.
TypedQuery:
----------
TypedQuery gives you an option to mention the type of entity when you create
a query and therefore any operation thereafter does not need an explicit cast to
the intended type. Whereas the normal Query API does not return the exact type of
Object you expect and you need to cast.
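The cast difference can be seen side by side (a sketch assuming a mapped Employee entity with a salary field and an open EntityManager em):

```java
// Untyped: getResultList() returns a raw List, so the caller must cast.
Query q = em.createQuery("select e from Employee e where e.salary > :min");
q.setParameter("min", 50_000);
@SuppressWarnings("unchecked")
List<Employee> fromQuery = (List<Employee>) q.getResultList();

// Typed: the entity class is given up front, so no cast is needed.
TypedQuery<Employee> tq =
        em.createQuery("select e from Employee e where e.salary > :min", Employee.class);
tq.setParameter("min", 50_000);
List<Employee> fromTypedQuery = tq.getResultList();
```

The two-argument createQuery(String, Class) overload is what returns a TypedQuery.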
When you have a number of dependent services, failure in one component might
have a wider impact on a number of components
1.Closed – In this state, all connection requests are allowed, and service
communication is intact. During normal processing, the circuit breaker is in a
closed state. However, if set failure thresholds are exceeded, the state changes to
“open.”
2.Open – In this state, all connection requests are blocked so that the
recovering service is not flooded with requests.
3.Half-open – In this state, a small number of connections are allowed to
pass through at regular intervals to test the service’s availability. If the
requests are successful, then the circuit breaker assumes that the service issue
has been resolved and switches it to the closed state. If the requests are not
successful, then the circuit breaker assumes that the service issue still exists
and switches back to the open state.
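The three states can be sketched as a tiny state machine in plain Java (real projects would use a library such as Resilience4j; the threshold values and the manual clock here are illustrative):

```java
public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private final int failureThreshold;
    private final long openMillis;
    private long openedAt;
    private long now = 0; // manual clock so state transitions are easy to test

    CircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    void tick(long millis) { now += millis; }

    /** May this call go through? Moves OPEN -> HALF_OPEN once the wait elapses. */
    boolean allowRequest() {
        if (state == State.OPEN && now - openedAt >= openMillis) {
            state = State.HALF_OPEN; // let a trial request probe the service
        }
        return state != State.OPEN;
    }

    void recordSuccess() {
        failures = 0;
        state = State.CLOSED; // trial succeeded: resume normal traffic
    }

    void recordFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN; // stop traffic and remember when we tripped
            openedAt = now;
        }
    }

    State state() { return state; }
}
```

Wrapping each remote call in allowRequest()/recordSuccess()/recordFailure() gives exactly the closed → open → half-open cycle described above.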
71. Does your application have a single connection pool or multiple connection pools?
Some modules have a single connection pool, and we can implement multiple
connection pools when the application uses multiple databases.
APIs: These provide a way for services to communicate with each other.
Schedulers: These control when services run and how they interact.
Service Monitoring: This ensures that your microservices are running properly
and collects data for analysis.
API gateway: An entry point for client requests that routes requests to the
appropriate microservices