Advanced Core Java
By Fayaz
JVM Architecture:
With languages like JavaScript and Python, the computer executes the instructions directly without having to compile them first. These languages are called interpreted languages.
Java uses a combination of both techniques. Java code is first compiled into byte code to generate a
class file. This class file is then interpreted by the Java Virtual Machine for the underlying platform. The
same class file can be executed on any version of JVM running on any platform and operating system.
The JVM has three main components:
1. Class Loader
2. Runtime Memory/Data Area
3. Execution Engine
1. Class Loaders:
There are three phases in the class loading process: loading, linking, and initialization.
Loading
Loading involves taking the binary representation (bytecode) of a class or interface with a particular name and creating the class or interface from it.
Bootstrap Class Loader - This is the root class loader. It is the superclass of Extension Class Loader and
loads the standard Java packages like java.lang, java.net, java.util, java.io, and so on. These packages are
present inside the rt.jar file and other core libraries present in the $JAVA_HOME/jre/lib directory.
Extension Class Loader - This is the subclass of the Bootstrap Class Loader and the superclass of the
Application Class Loader. This loads the extensions of standard Java libraries which are present in the
$JAVA_HOME/jre/lib/ext directory.
Application Class Loader - This is the final class loader and the subclass of Extension Class Loader. It
loads the files present on the classpath. By default, the classpath is set to the current directory of the
application. The classpath can also be modified by adding the -classpath or -cp command line option.
The JVM uses the ClassLoader.loadClass() method for loading the class into memory. It tries to load the
class based on a fully qualified name.
Class loading follows a delegation hierarchy: if a parent class loader is unable to find a class, the work is delegated to the child class loader. If the last child class loader isn't able to load the class either, it throws NoClassDefFoundError or ClassNotFoundException.
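A quick way to observe the hierarchy (a minimal sketch; the class name is illustrative):
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // the Bootstrap loader is implemented natively, so it prints as null
        System.out.println(String.class.getClassLoader());          // null (Bootstrap)
        System.out.println(ClassLoaderDemo.class.getClassLoader()); // Application Class Loader
    }
}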
Linking
After a class is loaded into memory, it undergoes the linking process. Linking a class or interface involves combining the different elements and dependencies of the program together. It consists of three phases: verification, preparation, and resolution.
Verification: In this phase, the structural correctness of the .class file is checked against a set of constraints; if verification fails, the JVM throws a VerifyError. For example, if the code has been built using Java 11, but is being run on a system that has Java 8 installed, the verification phase will fail.
Preparation: In this phase, the JVM allocates memory for the static fields of a class or interface, and
initializes them with default values.
For example, assume that you have declared the following variable in your class:
private static final boolean enabled = true;
During the preparation phase, JVM allocates memory for the variable enabled and sets its value to the
default value for a boolean, which is false.
Resolution: In this phase, symbolic references are replaced with direct references present in the runtime
constant pool.
For example, if you have references to other classes or constant variables present in other classes, they
are resolved in this phase and replaced with their actual references.
Initialization
Initialization involves executing the initialization method of the class or interface (known as <clinit>).
This can include calling the class's constructor, executing the static block, and assigning values to all the
static variables. This is the final stage of class loading.
The variable enabled was set to its default value of false during the preparation phase. In the
initialization phase, this variable is assigned its actual value of true.
Note: the JVM is multi-threaded. It can happen that multiple threads are trying to initialize the same
class at the same time. This can lead to concurrency issues. You need to handle thread safety to ensure
that the program works properly in a multi-threaded environment.
2. Runtime Memory/Data Area:
The JVM memory structure is divided into multiple memory areas such as the heap area, stack area, method area, and PC registers.
Metaspace: Metaspace is a new memory space introduced in Java 8; it has replaced the older PermGen memory space. The Metaspace holds all the reflective data of the virtual machine itself, such as class metadata and classloader-related data. Garbage collection of the dead classes and classloaders is triggered once the class metadata usage reaches the "MaxMetaspaceSize".
Heapspace: All the objects and their corresponding instance variables are stored here. This is the run-
time data area from which memory for all class instances and arrays is allocated.
Stack Area: Whenever a new thread is created in the JVM, a separate runtime stack is also created at the same time. All local variables, method calls, and partial results are stored in the stack area.
If the processing being done in a thread requires a larger stack size than what's available, the JVM
throws a StackOverflowError.
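A minimal sketch that provokes this error through unbounded recursion:
public class StackOverflowDemo {
    static long depth = 0;
    static void recurse() {
        depth++;
        recurse();   // no base case: every call adds a stack frame until the stack is exhausted
    }
    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed after " + depth + " frames");
        }
    }
}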
For every method call, one entry is made in the stack memory which is called the Stack Frame. When the
method call is complete, the Stack Frame is destroyed.
Local Variables – Each frame contains an array of variables known as its local variables. All local variables
and their values are stored here. The length of this array is determined at compile-time.
Operand Stack – Each frame contains a last-in-first-out (LIFO) stack known as its operand stack. This acts
as a runtime workspace to perform any intermediate operations. The maximum depth of this stack is
determined at compile-time.
Frame Data – All symbols corresponding to the method are stored here. This also stores the catch block
information in case of exceptions.
Note: Since the Stack Area is not shared, it is inherently thread safe.
Native Area:
The JVM contains stacks that support native methods. These methods are written in a language other than Java, such as C or C++. For every new thread, a separate native method stack is also allocated.
Program Counter (PC) Registers: The JVM supports multiple threads at the same time. Each thread has
its own PC Register to hold the address of the currently executing JVM instruction. Once the instruction
is executed, the PC register is updated with the next instruction.
Tenured/Old Generation - The JVM moves objects that live long enough in the survivor spaces to the "tenured" space in the old generation. When the tenured generation fills up, there is a full GC that is often much slower because it involves all live objects.
3. Execution Engine: Once the bytecode has been loaded into the main memory, and details are
available in the runtime data area, the next step is to run the program. The Execution Engine handles
this by executing the code present in each class.
JIT Compiler
The JIT Compiler overcomes the disadvantage of the interpreter. The Execution Engine first uses the
interpreter to execute the byte code, but when it finds some repeated code, it uses the JIT compiler.
The JIT compiler then compiles that repeated ("hot") bytecode into native machine code. This native machine code is used directly for repeated method calls, which improves the performance of the system.
What is PermGen ?
Short form for Permanent Generation, PermGen is the memory area in Heap that is used by the JVM
to store class and method objects. If your application loads lots of classes, PermGen utilization will be
high. PermGen also holds ‘interned’ Strings
The size of the PermGen space is configured by the Java command line option -XX:MaxPermSize
Typically 256 MB should be more than enough of PermGen space for most of the applications
However, it is not unusual to see the error "java.lang.OutOfMemoryError: PermGen space" if you are loading an unusually large number of classes.
Gone are the days of OutOfMemory Errors due to PermGen space. With Java 8, there is NO PermGen.
That’s right. So no more OutOfMemory Errors due to PermGen
The key difference between PermGen and Metaspace is this: while PermGen is part of Java Heap
(Maximum size configured by -Xmx option), Metaspace is NOT part of Heap. Rather Metaspace is part
of Native Memory (process memory) which is only limited by the Host Operating System.
While you will NOT run out of PermGen space anymore (since there is NO PermGen), you may consume
excessive Native memory making the total process size large. The issue is, if your application loads lots
of classes (and/or interned strings), you may actually bring down the Entire Server (not just your
application). Why ? Because the native memory is only limited by the Operating System. This means you
can literally take up all the memory on the Server. Not good.
It is critical that you add the new option -XX:MaxMetaspaceSize which sets the Maximum Metaspace
size for your application.
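For example (the jar name is illustrative):
java -XX:MaxMetaspaceSize=256m -jar myapp.jar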
Garbage Collector:
The Garbage Collector (GC) collects and removes unreferenced objects from the heap area. It is the process of reclaiming unused runtime memory automatically by destroying objects that are no longer referenced.
Garbage collection makes Java memory-efficient because it removes the unreferenced objects from heap memory and frees space for new objects. It involves two phases:
Mark - in this step, the GC identifies the objects that are still reachable through references and marks them; everything left unmarked is garbage
Sweep - in this step, the GC removes the objects identified during the previous phase
Garbage collection is done automatically by the JVM at regular intervals and does not need to be handled separately. It can also be triggered by calling System.gc(), but the execution is not guaranteed.
Serial GC - This is the simplest implementation of GC, and is designed for small applications running on
single-threaded environments. It uses a single thread for garbage collection. When it runs, it leads to a
"stop the world" event where the entire application is paused. The JVM argument to use Serial Garbage
Collector is -XX:+UseSerialGC
Parallel GC - This was the default implementation of GC in the JVM up to Java 8, and is also known as the Throughput Collector. It uses multiple threads for garbage collection, but still pauses the application when running. The JVM argument to use the Parallel Garbage Collector is -XX:+UseParallelGC.
Garbage First (G1) GC - G1GC was designed for multi-threaded applications that have a large heap size
available (more than 4GB). It partitions the heap into a set of equal size regions, and uses multiple
threads to scan them. G1GC identifies the regions with the most garbage and performs garbage
collection on that region first. The JVM argument to use G1 Garbage Collector is -XX:+UseG1GC
Note: There is another type of garbage collector called Concurrent Mark Sweep (CMS) GC. However, it
has been deprecated since Java 9 and completely removed in Java 14 in favour of G1GC.
Invoking System.gc() may have significant performance side effects on the application. The GC is by design an intelligent piece of code and knows when to invoke a partial or a full collection. Whenever there is a need, it tries to collect space from the Young Generation first (very low performance overhead), but when we force the JVM to invoke System.gc(), the JVM will do a Full GC, which might pause your application for a certain amount of time. Isn't that a bad approach then? Let the GC decide its timing.
Code Cache (non-heap): The HotSpot Java VM also includes a code cache, containing memory that is
used for compilation and storage of native code.
Common JVM Errors:
ClassNotFoundException - This occurs when the Class Loader is trying to load classes using Class.forName(), ClassLoader.loadClass() or ClassLoader.findSystemClass(), but no definition for the class with the specified name is found.
NoClassDefFoundError - This occurs when a compiler has successfully compiled the class, but the Class
Loader is not able to locate the class file at the runtime.
OutOfMemoryError - This occurs when the JVM cannot allocate an object because it is out of memory,
and no more memory could be made available by the garbage collector.
StackOverflowError - This occurs if the JVM runs out of space while creating new stack frames while
processing a thread.
OOPS:
Inheritance
Encapsulation
Abstraction
Polymorphism
https://www.geektrust.in/blog/2021/03/26/oops-and-clean-code-part-1/
https://www.geektrust.in/blog/2021/03/29/oops-clean-code-part-2/
Inheritance:
Not all classes and objects are unique in terms of their properties and methods.
More often than not, you can easily derive a few properties and methods as the common denominators
from a few classes.
These common properties and methods can be separated out and placed in a common class. All other classes that need these properties and methods can then "extend" them from that common class. In object-oriented programming, this common class is called the base class or the parent class, and the classes that extend this base class are known as derived classes, child classes, or subclasses.
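For example, a minimal sketch (all names are illustrative):
class Vehicle {                                          // base/parent class
    void start() { System.out.println("Starting engine"); }   // common behavior
}
class Car extends Vehicle {                              // derived/child class
    void openTrunk() { System.out.println("Trunk open"); }
}
// new Car().start();   // start() is inherited from Vehicle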
Encapsulation:
Using encapsulation we can also hide the class data members (variables) from the other classes.
class Salary {
    private double amt;                                   // data member hidden from other classes
    public Salary(double amt) { this.amt = amt; }
    public void incrementByPerc(double per) { this.amt += this.amt * (per / 100); }
    public double getAmt() { return amt; }
}
Abstraction:
In some of your applications, you will encounter functionalities that a few classes have in common.
But at the same time, even though the functionality is the same, the implementation is not.
Abstract class:
Interface:
Interfaces do not maintain state: they have only public abstract methods and no constructor (before Java 8).
Java 8 introduced default methods in interfaces: after an interface is released, we can still add default methods so that client code will not break.
Java 8 also introduced static methods in interfaces: static methods can act as utility methods and do not preserve state.
Prefer Interfaces over Abstract classes.
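A short sketch contrasting the two (all names are illustrative; the default/static methods match the Java 8 additions described above):
abstract class Shape {
    protected String name;                        // abstract classes can hold state
    Shape(String name) { this.name = name; }      // and can have constructors
    abstract double area();                       // implementation differs per subclass
}
interface Drawable {
    void draw();                                  // implicitly public abstract
    default void render() { draw(); }             // default method: added later without breaking clients
    static String version() { return "1.0"; }     // static utility method: no instance state
}
class Circle extends Shape implements Drawable {
    double r;
    Circle(double r) { super("circle"); this.r = r; }
    double area() { return Math.PI * r * r; }
    public void draw() { System.out.println("drawing " + name); }
}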
Polymorphism:
Polymorphism, as the name suggests, is a principle that allows us to declare multiple methods with the same name but different implementations.
There are two types here – static polymorphism and dynamic polymorphism.
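A compact sketch of both kinds (names are illustrative):
class Calculator {
    int add(int a, int b) { return a + b; }               // static polymorphism: overloading,
    double add(double a, double b) { return a + b; }      // resolved at compile time
}
class Animal { void speak() { System.out.println("..."); } }
class Dog extends Animal {
    @Override void speak() { System.out.println("Woof"); }  // dynamic polymorphism: overriding,
}                                                            // resolved at runtime via the actual object
// Animal a = new Dog(); a.speak();   // prints "Woof"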
Java is strictly pass-by-value. For a primitive, a copy of the variable is passed; for an object, a copy of the reference is passed.
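A small sketch of what this means in practice (names are illustrative):
public class PassByValueDemo {
    static void modify(StringBuilder param, int num) {
        param.append(" world");               // mutates the shared object: visible to caller
        param = new StringBuilder("other");   // reassigns the local copy of the reference: invisible to caller
        num = 99;                             // primitives are copied too
    }
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello");
        int n = 1;
        modify(sb, n);
        System.out.println(sb + " / " + n);   // prints "hello world / 1"
    }
}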
There are seven qualities to be satisfied for a programming language to be purely object-oriented:
Encapsulation/Data Hiding
Inheritance
Polymorphism
Abstraction
All predefined types are objects
All user-defined types are objects
All operations performed on objects must be only through methods exposed by the objects.
Java does not satisfy all of these: primitives like int are not objects, and an expression such as String str = "A" + "B"; performs an operation with the + operator rather than a method call. Hence Java is not a pure object-oriented language.
Enum:
Enum constructors are implicitly private, e.g. private Genre() {}.
Plain int or String constants are brittle; by using enums instead, you increase compile-time checking and avoid errors from passing in invalid constants, and you document which values are legal to use.
enum Color {
    RED, GREEN, BLUE;
}
// Conceptually, the enum above is equivalent to:
class Color {
    public static final Color RED = new Color();
    public static final Color BLUE = new Color();
    public static final Color GREEN = new Color();
}
Every enum constant represents an object of the enum type.
enum GeneralInformation{
NAME;
}
20.JUNIT5-->https://kheri.net/mockito2-tutorial/
Threads:
// Tail of a ThreadPoolExecutor example (the pool setup matches the description below;
// RejectedExecutionHandlerImpl and monitor come from the surrounding example)
ThreadPoolExecutor executorPool = new ThreadPoolExecutor(2, 4, 10, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(2), new RejectedExecutionHandlerImpl());
// ... submit tasks and start a monitor thread ...
Thread.sleep(30000);
// shut down the pool
executorPool.shutdown();
// shut down the monitor thread
Thread.sleep(5000);
monitor.shutdown();
Notice that while initializing the ThreadPoolExecutor, we are keeping initial pool size as 2, maximum
pool size to 4 and work queue size as 2. So if there are 4 running tasks and more tasks are submitted,
the work queue will hold only 2 of them and the rest of them will be handled by
RejectedExecutionHandlerImpl. Here is the output of the above program that confirms the above
statement.
The main difference between Executor, ExecutorService, and Executors class is that Executor is the core
interface which is an abstraction for parallel execution. It separates tasks from execution, this is
different from java.lang.Thread class which combines both task and its execution.
ExecutorService is an extension of the Executor interface and provides a facility for returning a Future
object and terminate, or shut down the thread pool. Once the shutdown is called, the thread pool will
not accept new tasks but complete any pending task. It also provides a submit() method which
extends Executor.execute() method and returns a Future.
The Future object provides the facility of asynchronous execution, which means you don't need to wait
until the execution finishes, you can just submit the task and go around, come back and check if the
Future object has the result, if the execution is completed then it would have a result which you can
access by using the Future.get() method. Just remember that this method is a blocking method i.e. it will
wait until execution finish and the result is available if it's not finished already.
By using the Future object returned by ExecutorService.submit() method, you can also cancel the
execution if you are not interested anymore. It provides a cancel() method to cancel any pending
execution.
Another important difference between ExecutorService and Executor is that Executor defines execute()
method which accepts an object of the Runnable interface, while submit() method can accept objects of
both Runnable and Callable interfaces.
Executors is a utility class similar to Collections, which provides factory methods to create different
types of thread pools
shutdown() - when shutdown() method is called on an executor service, it stops accepting new tasks,
waits for previously submitted tasks to execute, and then terminates the executor.
shutdownNow() - this method interrupts the running task and shuts down the executor immediately.
Executor and ExecutorService are the main Interfaces. ExecutorService can execute Runnable and
Callable tasks.
Executor interface has execute method. ExecutorService has submit(), invokeAny() and invokeAll().
Executors is a utility/factory class with methods that create and return an ExecutorService.
Runnable r = () -> {
    System.out.println("test");
};
ExecutorService es = Executors.newFixedThreadPool(10);
es.execute(r);    // or es.submit(r), which also returns a Future
Callable<String> c = () -> {
    System.out.println("Callable test");
    return "test";
};
List<Callable<String>> al = new ArrayList<>();
al.add(c);
al.add(c);
List<Future<String>> futures = es.invokeAll(al);   // runs all tasks, waits for all (throws InterruptedException)
String result = es.invokeAny(al);                  // returns the result of the first task to succeed
CountDownLatch is a construct that a thread waits on while other threads count down on the latch until
it reaches zero. You use the Future to obtain a result from a submitted Callable, and you use a
CountDownLatch when you want to be notified when all threads have completed -- the two are not
directly related, and you use one, the other or both together when and where needed.
CyclicBarrier is a synchronizer that allows a set of threads to wait for each other to reach a common
execution point, also called a barrier.
CyclicBarriers are used in programs in which we have a fixed number of threads that must wait for each
other to reach a common point before continuing execution.
The barrier is called cyclic because it can be re-used after the waiting threads are released.
CyclicBarrier waits for a certain number of threads, while CountDownLatch waits for a certain number of events (one thread could call CountDownLatch.countDown() several times). A CountDownLatch cannot be reused once opened. Also, the thread which calls CountDownLatch.countDown() only signals that it finished some work; it doesn't block (whereas CyclicBarrier.await() is a blocking method) and can continue to do other work.
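A minimal CountDownLatch sketch (names are illustrative): the main thread blocks in await() until three workers have each called countDown():
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();   // signals completion; never blocks
            }).start();
        }
        latch.await();               // main thread blocks until the count reaches zero
        System.out.println("All workers finished");
    }
}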
The CyclicBarrier uses an all-or-none breakage model for failed synchronization attempts: If a thread
leaves a barrier point prematurely because of interruption, failure, or timeout, all other threads waiting
at that barrier point will also leave abnormally via BrokenBarrierException (or InterruptedException if
they too were interrupted at about the same time).
@Override
public void run() {
    String thisThreadName = Thread.currentThread().getName();
    List<Integer> partialResult = new ArrayList<>();
    partialResults.add(partialResult);
    try {
        System.out.println(thisThreadName + " waiting for others to reach barrier.");
        cyclicBarrier.await();   // blocks until all parties have called await()
    } catch (InterruptedException e) {
        // ...
    } catch (BrokenBarrierException e) {
        // ...
    }
}
}
Java 8 introduced the CompletableFuture class. Along with the Future interface, it also implements the CompletionStage interface.
This interface defines the contract for an asynchronous computation step that we can combine with
other steps.
Combining Futures
The best part of the CompletableFuture API is the ability to combine CompletableFuture instances in a
chain of computation steps.
If you want to run some background task asynchronously and don’t want to return anything from the
task, then you can use CompletableFuture.runAsync() method. It takes a Runnable object and returns
CompletableFuture<Void>.
CompletableFuture.runAsync() is useful for tasks that don’t return anything. But what if you want to
return some result from your background task?
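A minimal sketch of both entry points:
CompletableFuture<Void> noResult = CompletableFuture.runAsync(() -> {
    System.out.println("Running a background task");   // Runnable: nothing to return
});
CompletableFuture<String> withResult = CompletableFuture.supplyAsync(() -> {
    return "result of the computation";                // Supplier<String>: returns a value
});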
2. CompletableFuture<String> completableFuture =
       CompletableFuture.supplyAsync(() -> "Hello")
           .thenCompose(s -> CompletableFuture.supplyAsync(() -> s + " World"));
3. CompletableFuture<OrchestrationContext> completableFuture =
       CompletableFuture.supplyAsync(() -> {
           return prepareWorkflow(initialContext);
       }).thenApply(stage1Context -> {
           return performObjectCreation(stage1Context);
       });
The CompletableFuture.get() method is blocking. It waits until the Future is completed and returns the
result after its completion.
But, that’s not what we want right? For building asynchronous systems we should be able to attach a
callback to the CompletableFuture which should automatically get called when the Future completes.
That way, we won’t need to wait for the result, and we can write the logic that needs to be executed
after the completion of the Future inside our callback function.
thenApply: You can use thenApply() method to process and transform the result of a
CompletableFuture when it arrives. It takes a Function<T,R> as an argument. Function<T,R> is a simple
functional interface representing a function that accepts an argument of type T and produces a result of
type R –
// Create a CompletableFuture
CompletableFuture<String> whatsYourNameFuture = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return "Rajeev";
});
thenApply():
You can also write a sequence of transformations on the CompletableFuture by attaching a series of
thenApply() callback methods. The result of one thenApply() method is passed to the next in the series –
CompletableFuture<String> welcomeText = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return "Rajeev";
}).thenApply(name -> {
return "Hello " + name;
}).thenApply(greeting -> {
return greeting + ", Welcome to the CalliCoder Blog";
});
System.out.println(welcomeText.get());
// Prints - Hello Rajeev, Welcome to the CalliCoder Blog
If you don’t want to return anything from your callback function and just want to run some piece of
code after the completion of the Future, then you can use thenAccept() and thenRun() methods. These
methods are consumers and are often used as the last callback in the callback chain.
CompletableFuture.supplyAsync(() -> {
    return ProductService.getProductDetail(productId);
}).thenAccept(product -> {
    System.out.println("Got product detail from remote service " + product.getName());
});
Let’s say that you want to fetch the details of a user from a remote API service and once the user’s detail
is available, you want to fetch his Credit rating from another service.
In earlier examples, the function passed to the thenApply() callback would return a simple value, but in this case it returns a CompletableFuture. Therefore, the final result in the above case is a nested CompletableFuture.
If you want the final result to be a top-level Future, use thenCompose() method instead -
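A hedged sketch of that dependent-futures case (UserService, CreditRatingService and the User/Double types are hypothetical):
CompletableFuture<User> userFuture =
        CompletableFuture.supplyAsync(() -> UserService.getUserDetail(userId));
// thenApply here would yield CompletableFuture<CompletableFuture<Double>>;
// thenCompose flattens the nesting into a top-level future
CompletableFuture<Double> ratingFuture = userFuture.thenCompose(
        user -> CompletableFuture.supplyAsync(() -> CreditRatingService.getCreditRating(user)));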
While thenCompose() is used to combine two Futures where one future is dependent on the
other, thenCombine() is used when you want two Futures to run independently and do something after
both are complete.
System.out.println("Retrieving weight.");
CompletableFuture<Double> weightInKgFuture = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return 65.0;
});
System.out.println("Retrieving height.");
CompletableFuture<Double> heightInCmFuture = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return 177.8;
});
System.out.println("Calculating BMI.");
CompletableFuture<Double> combinedFuture = weightInKgFuture
.thenCombine(heightInCmFuture, (weightInKg, heightInCm) -> {
Double heightInMeter = heightInCm/100;
return weightInKg/(heightInMeter*heightInMeter);
});
When we need to execute multiple Futures in parallel, we usually want to wait for all of them to execute
and then process their combined results.
The CompletableFuture.allOf static method allows to wait for the completion of all of the Futures
provided as a var-arg:
CompletableFuture<String> future1
= CompletableFuture.supplyAsync(() -> "Hello");
CompletableFuture<String> future2
= CompletableFuture.supplyAsync(() -> "Beautiful");
CompletableFuture<String> future3
= CompletableFuture.supplyAsync(() -> "World");
CompletableFuture<Void> combinedFuture
= CompletableFuture.allOf(future1, future2, future3);
// ...
combinedFuture.get();
Error Handling:
String name = null;
CompletableFuture<String> completableFuture = CompletableFuture.supplyAsync(() -> {
    if (name == null) {
        throw new RuntimeException("Computation error!");
    }
    return "Hello, " + name;
}).handle((s, t) -> s != null ? s : "Hello, Stranger!");
assertEquals("Hello, Stranger!", completableFuture.get());
Fork-Join framework (Java 7):
The Executor framework was enhanced to support fork-join tasks, which run in a special kind of executor service known as a fork-join pool.
To achieve data parallelism: a task is divided into multiple subtasks until each subtask reaches its least possible size, and those subtasks are executed in parallel.
Work-Stealing Algorithm: Simply put, free threads try to “steal” work from deques of busy threads.
ForkJoinTask<V>
ForkJoinTask is the base type for tasks executed inside ForkJoinPool. In practice, one of its two
subclasses should be extended: the RecursiveAction for void tasks and the RecursiveTask<V> for tasks
that return a value. They both have an abstract method compute() in which the task’s logic is defined.
In this example, we use an array stored in the arr field of the CustomRecursiveTask class to represent the
work. The createSubtasks() method recursively divides the task into smaller pieces of work until each
piece is smaller than the threshold. Then the invokeAll() method submits the subtasks to the common
pool and returns a list of Future.
@Override
protected Integer compute() {
if (arr.length > THRESHOLD) {
return ForkJoinTask.invokeAll(createSubtasks())
.stream()
.mapToInt(ForkJoinTask::join)
.sum();
} else {
return processing(arr);
}
}
the example splits the task if workload.length() is larger than a specified threshold using
the createSubtask() method.
The String is recursively divided into substrings, creating CustomRecursiveTask instances that are based
on these substrings.
@Override
protected void compute() {
if (workload.length() > THRESHOLD) {
ForkJoinTask.invokeAll(createSubtasks());
} else {
processing(workload);
}
}
private List<CustomRecursiveAction> createSubtasks() {
    List<CustomRecursiveAction> subtasks = new ArrayList<>();
    String partOne = workload.substring(0, workload.length() / 2);   // split the workload in half
    String partTwo = workload.substring(workload.length() / 2);
    subtasks.add(new CustomRecursiveAction(partOne));
    subtasks.add(new CustomRecursiveAction(partTwo));
    return subtasks;
}
With ForkJoinPool’s constructors, we can create a custom thread pool with a specific level of parallelism,
thread factory and exception handler. Here the pool has a parallelism level of 2. This means that pool
will use two processor cores.
Parallel Streams:
Split → Spliterator
Execute → Fork/Join
Combine → Collect
The synchronized keyword doesn't support fairness: any thread can acquire the lock once it is released, and no preference can be specified. On the other hand, you can make a ReentrantLock fair by setting the fairness property when creating the instance. The fairness property grants the lock to the longest-waiting thread in case of contention.
ReentrantLock provides a convenient tryLock() method, which acquires the lock only if it is available and not held by any other thread. This reduces the blocking of threads waiting for a lock in a Java application.
3) One more noteworthy difference between ReentrantLock and the synchronized keyword in Java is interruptibility. With synchronized, a thread can remain blocked waiting for a lock for an indefinite period of time, and there is no way to control that.
ReentrantLock provides a method called lockInterruptibly(), which can be used to interrupt the thread while it is waiting for the lock.
Similarly, tryLock() with a timeout can be used to time out if the lock is not available within a certain time period.
4) ReentrantLock also provides convenient method to get List of all threads waiting for lock.
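A short sketch of the tryLock()-with-timeout pattern described above:
ReentrantLock lock = new ReentrantLock(true);     // true = fair lock
if (lock.tryLock(1, TimeUnit.SECONDS)) {          // wait at most 1 second (throws InterruptedException)
    try {
        // critical section
    } finally {
        lock.unlock();                            // always release in finally
    }
} else {
    System.out.println("Could not acquire lock, doing something else");
}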
Volatile: volatile is a keyword. It forces all threads to read the latest value of the variable from main memory instead of a thread-local cache.
No locking is required to access volatile variables; all threads can access a volatile variable's value at the same time.
Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable.
This means that changes to a volatile variable are always visible to other threads. What's more, when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change.
When to use: One thread modifies the data and other threads have to read latest value of data.
Other threads will take some action but they won't update data.
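A typical sketch of that pattern, a volatile stop flag (names are illustrative):
class Worker implements Runnable {
    private volatile boolean running = true;   // without volatile, the loop might never see the update
    public void run() {
        while (running) {
            // do work
        }
        System.out.println("Stopped");
    }
    public void stop() { running = false; }    // called from another thread; the write is visible immediately
}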
https://www.callicoder.com/java-locks-and-atomic-variables-tutorial/
https://www.callicoder.com/java-concurrency-issues-and-thread-synchronization/
AtomicXXX:
The AtomicXXX classes support lock-free, thread-safe programming on single variables. These classes (like AtomicInteger) resolve the memory inconsistency errors / side effects of modifying variables accessed by multiple threads, and additionally make compound read-modify-write operations (like increment) atomic, which volatile alone cannot do.
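For example, a minimal AtomicInteger sketch:
AtomicInteger counter = new AtomicInteger(0);
counter.incrementAndGet();           // atomic read-modify-write; counter++ on a volatile int would NOT be atomic
counter.compareAndSet(1, 10);        // sets to 10 only if the current value is 1
System.out.println(counter.get());   // 10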
synchronized:
synchronized is keyword used to guard a method or code block. By making method as synchronized has
two effects:
First, it is not possible for two invocations of synchronized methods on the same object to interleave.
When one thread is executing a synchronized method for an object, all other threads that invoke
synchronized methods for the same object
block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
When to use: Multiple threads can read and modify the data, and your business logic not only updates the data but also acts on what it reads (for example, check-then-act or read-modify-write sequences).
AtomicXXX offers volatile visibility semantics plus compareAndSet-style atomic operations, without using synchronization.
TDD Vs BDD Vs ATDD
The simple concept of TDD is to write and correct the failed tests before writing new code (before
development)
BDD gives a clearer understanding as to what the system should do from the perspective of the
developer and the customer.
TDD enables a good and robust design; still, your tests can be very far away from the users' requirements.
BDD is a way to ensure consistency between requirements and the developer tests.
Nested classes:
It is a way of logically grouping classes that are only used in one place: if a class is useful to only one other class, then it is logical to embed it in that class and keep the two together. Nesting such "helper classes" makes their package more streamlined.
It increases encapsulation: Consider two top-level classes, A and B, where B needs access to members of
A that would otherwise be declared private. By hiding class B within class A, A's members can be
declared private and B can access them. In addition, B itself can be hidden from the outside world.
It can lead to more readable and maintainable code: Nesting small classes within top-level classes places
the code closer to where it is used.
Non-static nested classes (inner classes): An inner class can access both static and non-static members of the outer class, even if they are private.
Local classes:
Local classes are a special type of inner classes – in which the class is defined inside a method or scope
block.
They have access to both static and non-static members in the enclosing context
void method() {
    class Local {                 // a class defined inside a method
        void run() {
            // method implementation
        }
    }
    new Local().run();
}
Anonymous classes:
They are inner classes with no name. Since they have no name, we can't use them in order to create
instances of anonymous classes.
As a result, we have to declare and instantiate anonymous classes in a single expression at the point of
use.
Extending a class:
We create anonymous class instances at the same moment as we declare them, typically by extending an existing class and overriding (@Override) one of its methods.
Implementing an interface:
Runnable action = new Runnable() {
    int count = 1;                    // anonymous classes can declare fields
    @Override
    public void run() {
        System.out.println("run " + count);
    }
};
-----
Uses: GUI event listeners are the classic case:
button.addActionListener(new ActionListener() {
    ...
});
java references:
Strong/Hard reference: The object can't be garbage collected if it's reachable through any strong
reference.
Weak Reference:
If you only have weak references to an object (with no strong references), then the object will be
reclaimed by GC in the very next GC cycle.
Soft Reference:
If you only have soft references to an object (with no strong references), then the object will be
reclaimed by GC only when JVM runs out of memory.
Phantom Reference: Objects that are referenced only by phantom references are eligible for garbage collection. But before removing them from memory, the JVM puts them in a queue called the 'reference queue'. They are put in the reference queue after the finalize() method has been called on them.
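A small sketch of a weak reference losing its referent (collection is not guaranteed, so the output may vary):
import java.lang.ref.WeakReference;

WeakReference<byte[]> weak = new WeakReference<>(new byte[1024]);
System.gc();                      // request a GC; execution is not guaranteed
System.out.println(weak.get());   // likely null once the object has been collected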
Asynchronous: Asynchronous literally means not synchronous. Email is asynchronous. You send a mail,
you don't expect to get a response NOW. But it is not non-blocking. Essentially what it means is an
architecture where "components" send messages to each other without expecting a response
immediately. HTTP requests are synchronous. Send a request and get a response.
Non-Blocking: This term is mostly used with IO. What this means is that when you make a system call, it
will return immediately with whatever result it has without putting your thread to sleep (with high
probability). For example non-blocking read/write calls return with whatever they can do and expect
caller to execute the call again. try_lock for example is non-blocking call. It will lock only if lock can be
acquired. Usual semantics for systems calls is blocking. read will wait until it has some data and put
calling thread to sleep.
Thread-per-Request Programming:
Consider a request handled in three steps: 1) query the database, 2) compute a result from the returned data, and 3) write the response. Here steps 1 and 3 are I/O tasks and 2 is a computation task. All three steps are synchronous blocking calls: step 2 cannot start until step 1 is completed, and step 3 cannot start until step 2 is completed. Let's assume that the DB query is a multi-table join that will take some time to execute. Let's also assume that we could receive hundreds of concurrent requests to do the above task.
We normally encapsulate all the steps to be performed by creating an object. In the traditional programming model, when we receive concurrent requests we create a thread and an object per request, and the thread executes the steps using the object. If we receive hundreds of requests, then we create hundreds of threads. The problem here is that threads consume memory: Java has to allocate a procedure stack with a considerable amount of memory for each thread. While a thread performs the I/O tasks, it has to wait for each step to complete before proceeding with the next one. For example, the thread has to wait for the DB server to execute the query and return the result. Since each thread consumes memory and spends much of its time waiting, this approach is neither practical nor scalable, and it sets a limit on the number of threads we can create for the application.
Event-Driven Programming:
The event-driven programming is based on asynchronous procedure call. That is, we split the above 1
big synchronous task into 3 synchronous simple tasks (ie each step becomes a task). All these tasks are
queued. All these tasks are read from a queue one by one and executed using dedicated worker threads
from a thread-pool. When there are no tasks in queue, worker threads would simply wait for the tasks
to arrive. Usually number of worker threads would be small enough to match the number of available
processors. This way, You can have 10000 tasks in the queue and process them all efficiently – but we
can not create 10000 threads in a single machine.
So the event-driven programming model is NOT going to increase the performance (up to certain limit)
or overall the response time could be still same. But it can process more number of concurrent requests
efficiently!
Reactive Programming:
Traditional calls are 1. Sync + Blocking; reactive programming is instead 2. Async and 3. Non-blocking.
In reactive programming, threads are not blocked or waiting for a request to complete. Instead they are
notified when the request is complete / the data changes. Till then they can do other tasks. This makes
us to use less resources to serve more requests.
Reactive programming is based on Observer design pattern. It has following interfaces / components.
• Publisher / Observable
• Observer / Subscriber
• Subscription
• Processor / Operators
Publisher:
Publisher is observable whom we are interested in listening to! These are the data sources or streams.
Observer:
Observer subscribes to Observable/Publisher. Observer reacts to the data emitted by the Publisher.
Publisher pushes the data to the Observers. Publishers are read-only whereas Observers are write-only.
Subscription:
Observer subscribes to Observable/Publisher via an object called Subscription. Publisher’s emitting rate
might be higher than Observer’s processing rate. So in that case, Observer might send some feedback to
the Publisher how it wants the data / what needs to be done when the publisher’s emitting rate is high
via Subscription object.
For example:
We keep the volume up or down based on the current volume when we listen to music.
We slow down the speed of a talk in YouTube if the speaker is too fast.
Processor:
Processor acts as both Publisher and Observer. They stay in between a Publisher and an Observer. It
consumes the messages from a publisher, manipulates it and sends the processed message to its
subscribers. They can be used to chain multiple processors in between a Publisher or an Observer.
Reactive Stream is a specification which specifies the above 4 interfaces to standardize the programming
libraries for Java. Some of the implementations are
• Project Reactor
• RxJava2
• Akka
Reactive programming is thus: 1. Async 2. Non-blocking 3. Functional style/declarative.
Schedulers:
Other than Observable and Observer, another key point in Reactive programming is Schedulers! Behind
the scenes, asynchronous behavior is achieved using threads. Reactive libraries hide the complexity and
provide rich set of APIs to manage threads for Observables and Observers. It has 2 important methods.
subscribeOn – specifies the threads on which observable should operate or the thread pool to be used
by the source to emit the data.
publishOn – specifies the threads on which observers should operate. Any publishOn affects the thread
pools used by the subsequent observers.
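A Project Reactor sketch of the two operators (assumes the reactor-core Schedulers API):
Flux.just(1, 2, 3)
    .subscribeOn(Schedulers.boundedElastic())   // source emits on the boundedElastic pool
    .map(i -> i * 2)
    .publishOn(Schedulers.parallel())           // operators below this point run on the parallel pool
    .subscribe(i -> System.out.println(Thread.currentThread().getName() + " : " + i));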
Publisher Types:
There are 2 types of Observables / Publishers
Cold Publisher
This is lazy
Publisher creates a data producer and generates new sets of values for each new subscription
When there are multiple observers, each observer might get different values
Example: Netflix. Movie will start streaming only if the subscriber wants to watch. Each subscriber can
watch a movie any time from the beginning
Hot Publisher
Values are generated outside the publisher even when there are no observers.
All the observers get the value from the single data producer irrespective of the time they started
subscribing to the publisher. It means any new observer might not see the old value emitted by the
publisher.
Example: Radio Stream. Listeners will start listening to the song currently playing. It might not be from
the beginning.
There are two Publisher implementations in Project Reactor:
1. Mono → emits 0 or 1 item
2. Flux → emits 0 to N items
The core difference is that Reactive is a push model, whereas the Java 8 Streams are a pull model. In a
reactive approach, events are pushed to the subscribers as they come in.
Flux:
Flux is an implementation of Publisher. It will emit 0...N elements and/or a complete or an error signal. A Stream pipeline is synchronous, whereas a Flux pipeline is completely asynchronous.
empty →To emit 0 element / or return empty Flux<T>
Flux.empty()
.subscribe(i -> System.out.println("Received : " + i));
//No output
just →The easiest way to emit an element is using the just method.
subscribe method accepts a Consumer<T> where we define what we do with the emitted element.
Flux.just(1)
.subscribe(i -> System.out.println("Received : " + i));
//Output
Received : 1
Unlike a stream, a Flux can have any number of Observers connected to the pipeline. We can also write it like this if we need to connect more than one observer to a source.
One observer might be collecting all elements into a list while another observer could be logging the element details.
//Output
Observer-1 : 1
Observer-2 : 1
just with arbitrary elements
//Output
Received : a
Received : b
Received : c
We can have multiple observers, and each observer will process the emitted elements independently. They might take their own time; everything happens asynchronously.
The output below shows that the entire pipeline is executed asynchronously by default.
System.out.println("Starts");
System.out.println("Ends");
//Just to block the execution - otherwise the program will end only with start and end messages
//Output
Starts
Ends
Observer-2 : a
Observer-1 : A
Observer-2 : b
Observer-1 : B
Observer-2 : c
Observer-2 : d
Observer-1 : C
Observer-1 : D
In the above code, I added below log method to better understand the behavior.
We have 2 observers subscribed to the source. This is why we have onSubscribe method
request(32) — here 32 is the default buffer size. Observer requests for 32 elements to buffer/emit.
Once all the elements are emitted, the complete call is invoked to inform the observers not to expect any more elements.
The subscribe method could accept other parameters as well to handle the error and completion calls.
So far we have been consuming the elements received via the pipeline. But we could also get some
unhandled exception. We can pass the handlers as shown here.
subscribe(
i -> System.out.println("Received :: " + i),
err -> System.out.println("Error :: " + err),
() -> System.out.println("Successfully completed"))
Let's take this example. We get the below output as expected. Here we simply divide 10 by each element.
Flux.just(1,2,3)
.map(i -> 10 / i)
.subscribe(
i -> System.out.println("Received :: " + i),
err -> System.out.println("Error :: " + err),
() -> System.out.println("Successfully completed"));
//Output
Received :: 10
Received :: 5
Received :: 3
Successfully completed
Now if we slightly modify our map operation as shown here, we would be doing a division by zero, which throws a RuntimeException that is handled cleanly here, without an ugly try/catch block.
Flux.just(1,2,3)
.map(i -> i / (i-2))
.subscribe(
i -> System.out.println("Received :: " + i),
err -> System.out.println("Error :: " + err),
() -> System.out.println("Successfully completed"));
//Output
Received :: -1
Error :: java.lang.ArithmeticException: / by zero
fromArray — when you have array. just should also work here.
String[] arr = {"Hi", "Hello", "How are you"};
Flux.fromArray(arr)
.filter(s -> s.length() > 2)
.subscribe(i -> System.out.println("Received : " + i));
//Output
Received : Hello
Received : How are you
fromIterable — When you have collection of elements and like to pass them via Flux pipeline.
List<String> list = Arrays.asList("vins", "guru");
Flux<String> stringFlux = Flux.fromIterable(list)
.map(String::toUpperCase);
Be careful with Streams!! A Flux can have more than one observer, but a java.util.stream.Stream can be consumed only once, so a Flux created from a stream (Flux.fromStream) will fail for the second observer with an error saying that the stream has already been operated upon or closed.
//observer-1
stringFlux
.map(String::length)
.subscribe(i -> System.out.println("Observer-1 :: " + i));//observer-2
stringFlux
.subscribe(i -> System.out.println("Observer-2 :: " + i));
In all the above options, we already have elements found before emitting. What if we need to keep on
finding and emitting elements programmatically? Flux has 2 additional methods for that. But these 2
methods need a separate article to explain as we need to understand what they are for and when to use
what! Flux.create, Flux.generate
Flux.just(1, 2, 3, 4)
.log()
.subscribe(new Subscriber<Integer>() {
@Override
public void onSubscribe(Subscription s) {
s.request(Long.MAX_VALUE);
}
@Override
public void onNext(Integer integer) {
elements.add(integer);
}
@Override
public void onError(Throwable t) {}
@Override
public void onComplete() {}
});
Backpressure:
Backpressure is when a downstream can tell an upstream to send it less data in order to prevent it from
being overwhelmed.
We can modify our Subscriber implementation to apply backpressure. Let's tell the upstream to only
send two elements at a time by using request():
Flux.just(1, 2, 3, 4)
.log()
.subscribe(new Subscriber<Integer>() {
private Subscription s;
int onNextAmount;
@Override
public void onSubscribe(Subscription s) {
this.s = s;
s.request(2);
}
@Override
public void onNext(Integer integer) {
elements.add(integer);
onNextAmount++;
if (onNextAmount % 2 == 0) {
s.request(2);
}
}
@Override
public void onError(Throwable t) {}
@Override
public void onComplete() {}
});
Combining Two Streams: We can then make things more interesting by combining another stream with
this one. Let's try this by using zip() function:
Flux.just(1, 2, 3, 4)
.log()
.map(i -> i * 2)
.zipWith(Flux.range(0, Integer.MAX_VALUE),
(one, two) -> String.format("First Flux: %d, Second Flux: %d", one, two))
.subscribe(elements::add);
assertThat(elements).containsExactly(
"First Flux: 2, Second Flux: 0",
"First Flux: 4, Second Flux: 1",
"First Flux: 6, Second Flux: 2",
"First Flux: 8, Second Flux: 3");
Hot Streams: For example, we could have a stream of mouse movements that constantly needs to be
reacted to or a Twitter feed. These types of streams are called hot streams, as they are always running
and can be subscribed to at any point in time, missing the start of the data.
ConnectableFlux: One way to create a hot stream is by converting a cold stream into one. Let's create a
Flux that lasts forever, outputting the results to the console, which would simulate an infinite stream of
data coming from an external resource
ConnectableFlux<Object> publish = Flux.create(fluxSink -> {
while(true) {
fluxSink.next(System.currentTimeMillis());
}
})
.publish();
publish.subscribe(System.out::println);
publish.subscribe(System.out::println);
// If we try running this code, nothing will happen. It's not until we call connect()
// that the Flux will start emitting:
publish.connect();
Concurrency: All of our above examples have currently run on the main thread. However, we can
control which thread our code runs on if we want. The Scheduler interface provides an abstraction
around asynchronous code, for which many implementations are provided for us. Let's try subscribing to
a different thread than main:
Flux.just(1, 2, 3, 4)
.log()
.map(i -> i * 2)
.subscribeOn(Schedulers.parallel())
.subscribe(elements::add);
Mono: Mono is another implementation of Publisher. It emits at most one item and then (optionally) terminates with an onComplete signal or an onError signal. Like Flux, Mono is also asynchronous in nature.
just — to emit one single item
Mono.just(1)
.subscribe(System.out::println);
Using Callable/Supplier
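A minimal sketch of the lazy Callable/Supplier factory methods (Reactor's Mono.fromSupplier and Mono.fromCallable):
Mono<String> fromSupplier = Mono.fromSupplier(() -> "from supplier");   // lazy: runs on subscription
Mono<String> fromCallable = Mono.fromCallable(() -> "from callable");   // Callable may throw a checked exception
fromSupplier.subscribe(System.out::println);
fromCallable.subscribe(System.out::println);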
fromRunnable — We know that a Runnable does not accept any parameter and does not return anything either. So what do you think the code below will do? It would just print "Hello" and nothing else, because there is no item to emit. But if we add the error and complete handlers, we get the output below. This is helpful if we need to be notified when a runnable has completed.
Mono.fromRunnable(() -> System.out.println("Hello"))
.subscribe(
i -> System.out.println("Received :: " + i),
err -> System.out.println("Error :: " + err),
() -> System.out.println("Successfully completed"));
//Output
Hello
Successfully completed
ConcurrentModificationException:
https://www.java67.com/2015/10/how-to-solve-concurrentmodificationexception-in-java-arraylist.html
https://www.digitalocean.com/community/tutorials/java-util-concurrentmodificationexception
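A minimal sketch of the usual cause and fix: structural modification during iteration must go through the Iterator:
List<String> names = new ArrayList<>(Arrays.asList("a", "b", "c"));
for (Iterator<String> it = names.iterator(); it.hasNext(); ) {
    if ("b".equals(it.next())) {
        it.remove();    // safe removal during iteration;
                        // names.remove("b") here would throw ConcurrentModificationException
    }
}
System.out.println(names);   // [a, c]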
Iterator Vs ListIterator:
• The basic difference between Iterator and ListIterator is that both being cursor, Iterator can
traverse elements in a collection only in forward direction. On the other hand, the ListIterator
can traverse in both forward and backward directions.
• Using Iterator you cannot add any element to a collection. But by using ListIterator you can add elements to the list (add()).
• Both cursors can remove elements, but only ListIterator can also replace an existing element using set().
• Using Iterator you can traverse any Collection (List, Set, Queue). But with ListIterator you can traverse List implementations only.
• You cannot retrieve the index of an element using Iterator. But as a List is sequential and index-based, you can retrieve an element's index using ListIterator (nextIndex()/previousIndex()).
Java8 features:
Lambda:
Lambda Restrictions:
1.Not allowed to use the same local variable name as Lambda parameters
4. Lambdas can read local variables from the enclosing scope, but are not allowed to modify them even though they are not declared final. This concept is called Effectively Final; it makes it safe to use lambdas in concurrent operations. Lambdas may, however, declare their own local variables:
Supplier<String> s = () -> {
    String localLambdaVar = "local";   // a local variable declared inside the lambda
    return localLambdaVar;
};
If we try to change the value either in the lambda itself or elsewhere in the enclosing scope, we will get
an error.
Enclosed variables can be used at any place in Lambda but the value can not be changed.
Method references:
The referenced method and the functional interface's abstract method must have compatible argument lists; the method name can differ, and the reference can be to a static or an instance method.
TEST::m1 --> static method reference
object::m1 --> instance method reference
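A small sketch of both kinds (names are illustrative):
Function<String, Integer> parse = Integer::parseInt;    // static method reference
String greeting = "hello";
Supplier<String> upper = greeting::toUpperCase;         // instance method reference bound to an object
System.out.println(parse.apply("42"));                  // 42
System.out.println(upper.get());                        // HELLO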
Functional Interfaces:
1. Consumer(forEach)
Consumer<String> r=(a)->{
System.out.println(a);
};
r.accept("hi");
BiConsumer: accepts 2 values.
2. Supplier (Collectors.toList())
The Supplier functional interface is yet another Function specialization that does not take any
arguments. We typically use it for lazy generation of values.
Supplier<String> s=()->{
return "hello";
};
System.out.println(s.get());
3. Function (map)
The Function interface has a method that receives one value and returns another. BiFunction accepts two inputs:
BiFunction<Integer,Integer,Integer> f=(a,b)->{
return a+b;
};
System.out.println(f.apply(1, 2));
4. Predicate (filter): a predicate is a function that receives a value and returns a boolean value.
BiPredicate accepts two values:
BiPredicate<Integer,Integer> p=(a,b)->{
return a>b;
};
System.out.println(p.test(1, 2));
5. Operators: receive and return the same value type
UnaryOperator --> if the input and return types are always the same, use UnaryOperator rather than Function.
UnaryOperator<Integer> o = i -> i * i * i;
o.apply(10);
Binary Operator will accept 2 values.
BinaryOperator-->if it takes same type of values as input and return type then go with BinaryOperator
BinaryOperator<String> o=(s1,s2)->s1+s2;
To resolve performance issues (autoboxing and unboxing), special interfaces for primitives are provided:
IntPredicate
IntFunction, ToIntFunction
IntConsumer, LongConsumer
IntSupplier
IntUnaryOperator
https://stackify.com/streams-guide-java-8/
2. Dates
LocalTime --> time.getMinute();
time.getSecond();
LocalDate date1 = LocalDate.of(1989, 12, 12);   // for a particular date (the class is LocalDate, not "LocalDateAndTime")
LocalDate date2 = LocalDate.now();
Period p = Period.between(date1, date2);
p.getYears(); p.getMonths(); p.getDays();
Year year = Year.of(1973);
year.isLeap();                                   // isLeap(), not isLeaf()
convert util date to LocalDate.
Date d=new Date();
LocalDate ld=d.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
Convert LocalDate to UtilDate
Date d1 = Date.from(ld.atTime(LocalTime.now()).atZone(ZoneId.systemDefault()).toInstant());
String str="13:00";
LocalTime lt=LocalTime.parse(str);
String str1="13*00";
DateTimeFormatter dtf=DateTimeFormatter.ofPattern("HH*mm");
LocalTime lt1=LocalTime.parse(str1,dtf);
4.Streams
map() → transforms each element one-to-one.
flatMap() → maps each element to a stream and flattens the results into a single stream (one-to-many).
5. forEach() vs forEachOrdered()
skip() to skip first 3 elements, and limit() to limit to 5 elements from the infinite stream generated using
iterate().
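For example:
Stream.iterate(1, i -> i + 1)          // infinite stream: 1, 2, 3, ...
      .skip(3)                         // drop the first 3 elements
      .limit(5)                        // keep only the next 5
      .forEach(System.out::println);   // prints 4 5 6 7 8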
7.Specialized Operations
8.Reduction Operations
9.joining
Stream.of("A", "B");
infinite streams:
iterate:
generate:
Stream.generate(Math::random)
.limit(5)
.forEach(System.out::println);
mapToObj() → converts each primitive int into an object:
IntStream.range(1, 2).mapToObj(j -> "Value: " + j)
        .forEach(System.out::println);
//convert to ODD,EVEN,ZERO
IntStream st=IntStream.of(0,1,2,5,7,8);
List<String> al=new ArrayList<String>();
st.forEach(j->{
if(j==0) {
al.add("ZERO");
}else if(j%2==0) {
al.add("EVEN");
}else if(j%2!=0) {
al.add("ODD");
}
});
al.forEach(i->System.out.println(i));
Optional:
Optional<String> op=Optional.ofNullable("Hello");
op.ifPresent(s->System.out.println(s));
if(op.isPresent()) {
op.get();
}
But Java 8 came with a new strategy for HashMap objects in the case of high collisions.
To address this issue, Java 8 buckets use balanced trees instead of linked lists after a certain threshold is reached.
This means HashMap starts by storing Entry objects in a linked list, but after the number of items in a single bucket becomes larger than a certain threshold (TREEIFY_THRESHOLD = 8), the bucket changes from a linked list to a balanced red-black tree.
These changes ensure a performance of O(log n) in worst-case scenarios and O(1) with a proper hashCode().
Node<K,V>[] table;   // HashMap's internal bucket array
CompletableFuture in Java8:
https://www.callicoder.com/java-8-completablefuture-tutorial/
Java9 features:
List<String> list = Arrays.asList("Hi", "Hello");
// Collections.unmodifiableList gives an unmodifiable VIEW; it has no separate storage,
// so changes made through the backing list show through it
List<String> listView = Collections.unmodifiableList(list);
list.set(1, "test");
System.out.println(listView);   // [Hi, test]
// Java 9's List.of("Hi", "Hello"), by contrast, creates a truly immutable list with its own storage
Compact Strings:
Compact String is one of the performance enhancements introduced in the JVM as part of JDK 9. Till JDK
8, whenever we create one String object then internally it is represented as char[], which consist the
characters of the String object.
• Till JDK 8, Java represent String object as char[] because every character in java is of 2 bytes
because Java internally uses UTF-16.
• If any String contains a word in the English language then the character can be represented
using a single byte only, we don’t need 2 bytes for each character. Many characters require 2
bytes to represent them but most of the characters require only 1 byte, which falls under LATIN-
1 character set. So, there is a scope to improve memory consumption and performance.
• Java9 introduced the concept of compact Strings. The main purpose of the compact string is
whenever we create a string object and the characters inside the object can be represented
using 1 byte, which is nothing but LATIN-1 representation, then internally java will create one
byte[]. In other cases, if any character requires more than 1 byte to represent it then each
character is stored using 2 bytes i.e. UTF-16 representation.
That's how Java developers changed the internal implementation of String, known as Compact Strings, which improves the memory consumption and performance of String.
Modules:
Java 9 introduces a new level of abstraction above packages, formally known as the Java Platform
Module System (JPMS), or “Modules” for short.
A Module is a group of closely related packages and resources along with a new module descriptor file.
When we create a module, we include a descriptor file that defines several aspects of our new module:
• Name – the name of our module
• Public Packages – a list of all packages we want accessible from outside the module
• Services Offered – we can provide service implementations that can be consumed by other modules
• Reflection Permissions – explicitly allows other classes to use reflection to access the private members of a package
• System Modules – These are the modules listed when we run the list-modules command above.
They include the Java SE and JDK modules.
• Application Modules – These modules are what we usually want to build when we decide to use
Modules. They are named and defined in the compiled module-info.class file included in the
assembled JAR.
• Automatic Modules – We can include unofficial modules by adding existing JAR files to the
module path. The name of the module will be derived from the name of the JAR. Automatic
modules will have full read access to every other module loaded by the path.
• Unnamed Module – When a class or JAR is loaded onto the classpath, but not the module path,
it’s automatically added to the unnamed module. It’s a catch-all module to maintain backward
compatibility with previously-written Java code.
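A minimal module descriptor sketch tying these pieces together; the module, package, and class names here are purely illustrative, not from the original notes:
// module-info.java
module com.example.inventory {
    requires java.sql;                                       // dependency on another module
    exports com.example.inventory.api;                       // public packages
    provides com.example.inventory.api.StockService
        with com.example.inventory.internal.DbStockService;  // service offered
    opens com.example.inventory.model;                       // reflection permission
}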
Java10 features:
Type inference refers to the automatic detection of the datatype of a variable, generally done at compile time.
Local variable type inference is a feature in Java 10 that allows the developer to skip the type declaration for local variables (those defined inside method bodies, initializer blocks, for-loops, and other blocks such as if-else); it is the compiler's job to figure out the datatype from the initializer.
Note that this feature is available only for local variables with an initializer. It cannot be used for member variables, method parameters, return types, etc.; the initializer is required because without it the compiler cannot infer the type.
There are also cases where declaring a local variable with the keyword 'var' produces an error, as shown in the sketch below.
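A few quick sketches inside a method body (the failing declarations are shown commented out):
void demo() {
    var message = "Hello";                     // inferred as String
    var numbers = java.util.List.of(1, 2, 3);  // inferred as List<Integer>
    for (var n : numbers) {                    // var also works in for-loop headers
        System.out.println(n);
    }
    // var x;              // error: cannot infer type without an initializer
    // var y = null;       // error: null carries no type information
    // var f = () -> "hi"; // error: a lambda needs an explicit target type
}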
Application Class-Data Sharing, or AppCDS, builds upon the Class-Data Sharing (CDS) feature that has
been part of the Java HotSpot VM since Java 5. Initially, CDS was designed to reduce the startup time of
Java applications by sharing common class metadata across different Java processes. However, this
functionality was limited to the JDK’s system classes.
Java 10 expanded this feature with the introduction of AppCDS, allowing application classes to also be
placed in the shared archive. This had the dual effect of improving startup time and reducing the
memory footprint of Java applications.
The Benefits:
By sharing common class metadata between different Java processes and placing application classes
into the shared archive, the overhead of class loading is significantly reduced. This directly improves the
startup time of applications, which is crucial for large-scale applications and services that require
frequent restarts or have many short-lived tasks.
Moreover, AppCDS reduces the memory footprint of the JVM. This is especially beneficial in
containerized and cloud environments where resources are shared, and efficiency is critical.
1. Create a list of classes that need to be included in the shared archive. You can generate this list by running your application with the JVM argument -XX:DumpLoadedClassList=<filename>.
2. Create the shared archive from that list of classes, using the -Xshare:dump JVM argument.
Example:
Here’s how you can create the class list and shared archive:
# Step 1: Create the class list
java -XX:DumpLoadedClassList=myapp.lst -cp myapp.jar
# Step 2: Create the shared archive
java -Xshare:dump -XX:SharedClassListFile=myapp.lst -XX:SharedArchiveFile=myapp.jsa -cp myapp.jar
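A likely third step, not shown in the original notes, is to launch the application with the archive enabled (both flags are standard HotSpot options):
# Step 3 (assumed): run the application using the shared archive
java -Xshare:on -XX:SharedArchiveFile=myapp.jsa -cp myapp.jar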
Java11 features:
String name="Fayaz";
System.out.println(name.repeat(3)); //FayazFayazFayaz
The new HTTP client from the java.net.http package was introduced in Java 9. It has now become a
standard feature in Java 11.
The new HTTP API improves overall performance and provides support for both HTTP/1.1 and HTTP/2:
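A minimal sketch of the standardized client (the URL is a placeholder):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpClientDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com")) // placeholder URL
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}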
indent adjusts the indentation of each line based on its integer parameter (note: String.indent was actually added in Java 12). If the parameter is greater than zero, new spaces are inserted at the beginning of each line. If the parameter is less than zero, spaces are removed from the beginning of each line; if a given line does not contain sufficient white space, all of its leading white-space characters are removed.
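For example (a small sketch; indent also normalizes each line to end with a line feed):
String text = "Hello\nWorld";
System.out.println(text.indent(4)); // prints each line with four leading spaces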
Switch Expressions:
Switch expressions are not only more compact and readable; they also remove the need for break statements, since execution does not fall through after the first match.
Another notable difference is that we can assign a switch expression directly to a variable, which was not possible previously. It's also possible to execute code in switch expressions without returning any value:
DayOfWeek dayOfWeek = LocalDate.now().getDayOfWeek();
String typeOfDay = switch (dayOfWeek) {
    case MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY -> "Working Day";
    case SATURDAY, SUNDAY -> "Day Off";
};
System.out.println(typeOfDay);
Text Blocks:
With Java 13, text blocks arrived as a preview feature: multi-line string literals delimited by triple quotes, which avoid most escape sequences.
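The notes don't include a sample here, so a minimal sketch:
String html = """
        <html>
            <body>Hello</body>
        </html>
        """;
System.out.println(html);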
Also with Java 13, using yield we can effectively return values from a switch block:
var me = 4;
var operation = "squareMe";
var result = switch (operation) {
case "doubleMe" -> {
yield me * 2;
}
case "squareMe" -> {
yield me * me;
}
default -> me;
};
System.out.println(result);
Java14 features:
1. Switch expressions have been standardized, so they are now a permanent part of the language.
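The example whose result the notes printed was lost; a minimal sketch consistent with the printed variable:
DayOfWeek day = LocalDate.now().getDayOfWeek();
boolean isTodayHoliday = switch (day) {
    case SATURDAY, SUNDAY -> true; // treating the weekend as a holiday, for illustration
    default -> false;
};
System.out.println(isTodayHoliday);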
2. Text blocks (in their second preview) gained new escape sequences:
• \ at the end of a line, so that a new-line character is not introduced
• \s to denote a single space
3. Records were introduced to reduce repetitive boilerplate code in data-model POJOs. They simplify day-to-day development, improve efficiency, and greatly minimize the risk of human error.
As we can see in the sketch below, we make use of a new keyword, record. This simple declaration automatically adds a constructor, getters, equals, hashCode and toString methods for us.
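The declaration the notes describe isn't shown; a one-line sketch plus usage:
public record Book(String title, String author, String isbn) { }

Book book = new Book("Title", "author", "isbn");
System.out.println(book);         // Book[title=Title, author=author, isbn=isbn]
System.out.println(book.title()); // generated accessor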
Java15 features:
Sealed classes:
The goal of sealed classes is to allow individual classes to declare which types may be used as sub-types. This also applies to interfaces, which can determine which types may implement them.
Previously, we could only prevent a class from being extended at all, using the final keyword. A sealed class controls which classes can extend it by naming them in its permitted list.
public abstract sealed class Person permits Employee, Manager { //... }
In this example, Person is a sealed class, and Employee and Manager are its only permitted subclasses.
It’s important to note that any class that extends a sealed class must itself be declared sealed, non-
sealed, or final. This ensures the class hierarchy remains finite and known by the compiler.
This finite and exhaustive hierarchy is one of the great benefits of using sealed classes.
Permitted subclasses must themselves be declared in one of three ways:
• sealed – they must in turn define which classes are permitted to inherit from them, using the permits keyword
• final – they cannot be extended any further
• non-sealed – they are open for extension by any class
Alternatively, if we define the permitted subclasses in the same file as the sealed class, we can omit the permits clause.
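A compact sketch of a complete hierarchy (all three classes in one file):
abstract sealed class Person permits Employee, Manager { }

final class Employee extends Person { }      // cannot be extended further

non-sealed class Manager extends Person { }  // open for further extension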
Java16 features:
1.Sealed classes, first introduced in Java 15, provide a mechanism to determine which sub-classes are
allowed to extend or implement a parent class or interface.
There are a few additions to sealed classes in Java 16. These are the changes that Java 16 introduces to
the sealed class:
• The Java language recognizes sealed, non-sealed, and permits as contextual keywords (similar to
abstract and extends)
• Restrict the ability to create local classes that are subclasses of a sealed class (similar to the
inability to create anonymous classes of sealed classes).
• Stricter checks when casting sealed classes and classes derived from sealed classes
With the release of Java 16, we can now define records as members of inner classes. This became possible because Java 16 relaxed restrictions on inner classes declaring static members, something that was missed as part of the incremental release in Java 15:
class OuterClass {
    class InnerClass {
        // Java 16 allows this: an (implicitly static) record declared inside an inner class
        record Book(String title, String author, String isbn) { }
        Book book = new Book("Title", "author", "isbn");
    }
}
Java18 features:
Pattern matching for switch (still a preview feature in Java 18) requires all possible values to be handled in the switch block; otherwise it prompts a compile-time error.
The code the notes refer to was omitted; the sketch below is fine because its default branch handles all remaining types.
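A minimal exhaustive pattern-matching switch (describe is an illustrative method name):
static String describe(Object obj) {
    return switch (obj) {
        case Integer i -> "an integer: " + i;
        case String s -> "a string: " + s;
        default -> "something else"; // the default makes the switch exhaustive
    };
}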
The Foreign Function & Memory API allows the developer to call code outside the JVM (foreign functions), access data stored outside the JVM (off-heap data), and work with memory not managed by the JVM (foreign memory).
In Java 18, this JEP makes UTF-8 the default charset. We can still configure a different default by setting the system property file.encoding; in particular, if file.encoding is set to COMPAT, the JVM chooses the default charset using the Java 17 and earlier algorithm.
Java19 features:
2.Virtual Threads:
This JEP introduces virtual threads, a lightweight implementation of threads provided by the JDK instead
of the OS. The number of virtual threads can be much larger than the number of OS threads. These
virtual threads help increase the throughput of the concurrent applications.
In Java, every instance of java.lang.Thread is a platform thread that runs Java code on an underlying OS thread, so the number of platform threads is limited by the number of available OS threads.
A virtual thread is also an instance of java.lang.Thread, but it runs Java code on a carrier OS thread that it shares with other virtual threads, which is why the number of virtual threads can be much larger than the number of OS threads.
Because virtual threads are not limited by the number of OS threads, we can serve far more concurrent requests and achieve higher throughput.
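A minimal sketch (the API was a preview in Java 19; Executors.newVirtualThreadPerTaskExecutor creates one virtual thread per submitted task):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int taskId = i;
                executor.submit(() -> {
                    Thread.sleep(100); // blocking is cheap on a virtual thread
                    return taskId;
                });
            }
        } // close() waits for all submitted tasks to finish
    }
}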
Java 20 features:
1. Added support for record patterns in the header of an enhanced for loop, as sketched below.
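A sketch of this (record patterns in for-loop headers were a Java 20 preview and were later dropped from the preview in Java 21; Point and sumOfX are illustrative names):
record Point(int x, int y) { }

static int sumOfX(java.util.List<Point> points) {
    int total = 0;
    for (Point(int x, int y) : points) { // the record pattern deconstructs each element
        total += x;
    }
    return total;
}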
withLock(connection, () -> {
// Execute the code block here.
});
Java 21 features:
1.String Templates:
// As of Java 21 (string templates are a preview feature)
String productName = "Widget";
double productPrice = 29.99;
boolean productAvailable = true;
// The template line was missing from the notes; an assumed reconstruction using the STR processor:
String productInfo = STR."Product: \{productName}, Price: \{productPrice}, Available: \{productAvailable}";
System.out.println(productInfo);
2. Sequenced Collections – new interfaces (SequencedCollection, SequencedSet, SequencedMap) that give every ordered collection a uniform way to reach its first and last elements; a brief sketch follows.
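A minimal sketch of the new List methods inherited from SequencedCollection:
var list = new java.util.ArrayList<Integer>(java.util.List.of(1, 2, 3));
list.addFirst(0);                    // new in Java 21
list.addLast(4);
System.out.println(list.getFirst()); // 0
System.out.println(list.getLast());  // 4
System.out.println(list.reversed()); // [4, 3, 2, 1, 0]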
3. Structured concurrency:
The structured concurrency feature aims to simplify Java concurrent programs by treating multiple tasks
running in different threads (forked from the same parent thread) as a single unit of work. Treating all
such child threads as a single unit will help in managing all threads as a unit; thus, canceling and error
handling can be done more reliably.
In structured multi-threaded code, if a task splits into concurrent subtasks, they all return to the same
place i.e., the task’s code block. This way, the lifetime of a concurrent subtask is confined to that
syntactic block.
In this approach, subtasks work on behalf of a task that awaits their results and monitors them for
failures. At run time, structured concurrency builds a tree-shaped hierarchy of tasks, with sibling
subtasks being owned by the same parent task. This tree can be viewed as the concurrent counterpart
to the call stack of a single thread with multiple method calls.
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    // In the Java 21 preview API, fork() returns a Subtask rather than a Future
    StructuredTaskScope.Subtask<AccountDetails> accountDetails = scope.fork(() -> getAccountDetails(id));
    StructuredTaskScope.Subtask<LinkedAccounts> linkedAccounts = scope.fork(() -> fetchLinkedAccounts(id));
    StructuredTaskScope.Subtask<DemographicData> userDetails = scope.fork(() -> fetchUserDetails(id));
    scope.join();                                         // join all subtasks
    scope.throwIfFailed(e -> new WebApplicationException(e));
    // The subtasks have completed by now, so process the results
    return new Response(accountDetails.get(),
            linkedAccounts.get(),
            userDetails.get());
}
unnamed/unused variables:
It is common in some other programming languages (such as Scala and Python) to skip naming a variable that we will not use later. Since Java 21 (as a preview feature), we can use unnamed/unused variables in Java as well.
Unused variables:
String s = "hello";
try {
int i = Integer.parseInt(s);
//use i
} catch (NumberFormatException _) {
System.out.println("Invalid number: " + s);
}
//unnamed pattern variables: the original example was cut off; a minimal completion
public void print(Object o) {
    switch (o) {
        case Integer _ -> System.out.println("an Integer"); // the value itself is not needed
        default -> System.out.println("something else");
    }
}
References:
https://projectreactor.io/docs/core/release/reference/
https://www.baeldung.com/java-reactor-flux-vs-mono
https://vinsguru.medium.com/java-reactive-programming-flux-vs-mono-c94316b55f36
https://www.programmr.com/blogs/difference-between-asynchronous-and-non-blocking
https://vinsguru.medium.com/java-reactive-programming-schedulers-359b5918aadd
https://vinsguru.medium.com/java-reactive-programming-flux-create-vs-flux-generate-38a23eb8c053
https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-2.html#jvms-2.5.4
https://www.linkedin.com/pulse/java-virtual-machine-changes-78-9-kunal-saxena
https://www.freecodecamp.org/news/jvm-tutorial-java-virtual-machine-architecture-explained-for-beginners/
http://karunsubramanian.com/websphere/one-important-change-in-memory-management-in-java-8/
https://stackify.com/streams-guide-java-8/
https://mkyong.com/java/what-is-new-in-java-19/