
Advanced Core Java

By Fayaz
JVM Architecture:

In languages like JavaScript and Python, the computer executes the instructions directly, without a separate compilation step. These languages are called interpreted languages.

Java uses a combination of both techniques. Java code is first compiled into byte code to generate a
class file. This class file is then interpreted by the Java Virtual Machine for the underlying platform. The
same class file can be executed on any version of JVM running on any platform and operating system.

The JVM consists of three distinct components:

1. Class Loader
2. Runtime Memory/Data Area
3. Execution Engine

1. Class Loader:

There are three phases in the class loading process: loading, linking, and initialization.
Loading

Loading involves finding the binary representation (bytecode) of a class or interface with a particular
name and creating the class or interface object from it.

There are three built-in class loaders available in Java:

Bootstrap Class Loader - This is the root class loader. It is the parent of the Extension Class Loader and
loads the standard Java packages like java.lang, java.net, java.util, java.io, and so on. In Java 8 and
earlier, these packages live inside the rt.jar file and other core libraries in the $JAVA_HOME/jre/lib directory.

Extension Class Loader - This is the child of the Bootstrap Class Loader and the parent of the
Application Class Loader. It loads the extensions of the standard Java libraries, which are present in the
$JAVA_HOME/jre/lib/ext directory. (From Java 9 onwards it is replaced by the Platform Class Loader.)

Application Class Loader - This is the final class loader and the child of the Extension Class Loader. It
loads the classes present on the classpath. By default, the classpath is the current directory of the
application. The classpath can also be modified with the -classpath or -cp command line option.

The JVM uses the ClassLoader.loadClass() method for loading the class into memory. It tries to load the
class based on a fully qualified name.

Class loading follows the delegation model: each loader first delegates the request to its parent, and
only attempts the load itself if the parent cannot find the class. If no loader in the chain can load the
class, the JVM throws a NoClassDefFoundError or ClassNotFoundException.
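The loader hierarchy described above can be inspected directly. A minimal sketch (note that on Java 9+ the middle loader is the platform loader rather than the extension loader):

```java
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // The application class loader loads classes from the classpath
        ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
        System.out.println("application: " + appLoader);

        // Its parent is the extension (Java 8) / platform (Java 9+) loader
        System.out.println("parent: " + appLoader.getParent());

        // The bootstrap loader is implemented natively and shows up as null
        System.out.println("bootstrap: " + String.class.getClassLoader());
    }
}
```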

Linking

After a class is loaded into memory, it undergoes the linking process. Linking a class or interface involves
combining the different elements and dependencies of the program together.

Linking includes the following steps:


Verification: This phase checks the structural correctness of the .class file against a set of
constraints or rules. If verification fails for some reason, we get a VerifyError.

For example, if the code has been built using Java 11, but is being run on a system that has Java 8
installed, the class file check fails (in this particular case with an UnsupportedClassVersionError).

Preparation: In this phase, the JVM allocates memory for the static fields of a class or interface, and
initializes them with default values.

For example, assume that you have declared the following variable in your class:

private static final boolean enabled = true;

During the preparation phase, JVM allocates memory for the variable enabled and sets its value to the
default value for a boolean, which is false.

Resolution: In this phase, symbolic references are replaced with direct references present in the runtime
constant pool.

For example, if you have references to other classes or constant variables present in other classes, they
are resolved in this phase and replaced with their actual references.

Initialization

Initialization involves executing the initialization method of the class or interface (known as <clinit>):
running the static initializer blocks and assigning their actual values to the static variables.
(Constructors belong to instance creation, not to class initialization.) This is the final stage of class loading.

For example, when we declared the following code earlier:

private static final boolean enabled = true;

The variable enabled was set to its default value of false during the preparation phase. In the
initialization phase, this variable is assigned its actual value of true.
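The preparation-then-initialization order described above can be observed with a small sketch (a non-constant field is used here, since compile-time constants are handled slightly differently):

```java
public class InitDemo {
    // During preparation this field gets the int default value (0);
    // the assignment to 41 runs later, inside <clinit>
    static int counter = 41;

    static {
        counter++;   // static blocks are also part of <clinit>
    }

    public static void main(String[] args) {
        System.out.println(counter);   // 42
    }
}
```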
Note: the JVM is multi-threaded, and multiple threads may attempt to initialize the same class at the
same time. The JVM itself synchronizes class initialization, so each class is initialized exactly once;
your own static initializers, however, should avoid side effects that are unsafe to run under that lock.

2. Runtime Memory/Data Area:

The JVM memory structure is divided into multiple areas such as the heap area, stack area, method area, and PC registers.

Metaspace: Metaspace is a new memory space – starting from the Java 8 version; it has replaced the
older PermGen memory space. The metaspace holds all the reflective data of the virtual machine itself,
such as class metadata, classloader related data. Garbage collection of the dead classes and classloaders
is triggered once the class metadata usage reaches the “MaxMetaspaceSize”.

Heapspace: All the objects and their corresponding instance variables are stored here. This is the run-
time data area from which memory for all class instances and arrays is allocated.

Stack Area: Whenever a new thread is created in the JVM, a separate runtime stack is also created at the
same time. All local variables, method calls, and partial results are stored in the stack area.

If the processing being done in a thread requires a larger stack size than what's available, the JVM
throws a StackOverflowError.

For every method call, one entry is made in the stack memory which is called the Stack Frame. When the
method call is complete, the Stack Frame is destroyed.
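The stack limit is easy to hit deliberately. A quick sketch, using a hypothetical unbounded recursion:

```java
public class StackDepth {
    static int depth = 0;

    static void recurse() {
        depth++;          // one new stack frame per call
        recurse();        // never returns normally
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("overflowed after " + depth + " frames");
        }
    }
}
```

The exact frame count depends on the stack size (tunable with -Xss) and the frame's local-variable footprint.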

The Stack Frame is divided into three sub-parts:

Local Variables – Each frame contains an array of variables known as its local variables. All local variables
and their values are stored here. The length of this array is determined at compile-time.

Operand Stack – Each frame contains a last-in-first-out (LIFO) stack known as its operand stack. This acts
as a runtime workspace to perform any intermediate operations. The maximum depth of this stack is
determined at compile-time.

Frame Data – All symbols corresponding to the method are stored here. This also stores the catch block
information in case of exceptions.

Note: Since the Stack Area is not shared, it is inherently thread safe.

Native Area:

The JVM contains stacks that support native methods, which are written in a language other
than Java, such as C or C++. For every new thread, a separate native method stack is also allocated.

Program Counter (PC) Registers: The JVM supports multiple threads at the same time. Each thread has
its own PC Register to hold the address of the currently executing JVM instruction. Once the instruction
is executed, the PC register is updated with the next instruction.

Heap generations (Java 8 runtime data areas):


The Young generation - This further consists of one Eden Space and two survivor spaces. The VM
initially assigns all objects to Eden space, and most objects die there. When VM performs a minor GC, it
moves any remaining objects from the Eden space to one of the survivor spaces.

Tenured/Old Generation - VM moves objects that live long enough in the survivor spaces to the
"tenured" space in the old generation. When the tenured generation fills up, there is a full GC that is
often much slower because it involves all live objects.

3. Execution Engine: Once the bytecode has been loaded into the main memory, and details are
available in the runtime data area, the next step is to run the program. The Execution Engine handles
this by executing the code present in each class.

JIT Compiler

The JIT Compiler overcomes the main disadvantage of the interpreter: re-interpreting the same code over
and over is slow. The Execution Engine first uses the interpreter to execute the bytecode, but when it
finds frequently executed (hot) code, it hands it to the JIT compiler.

The JIT compiler compiles that bytecode to native machine code. The native machine code is then used
directly for repeated method calls, which improves the performance of the system.

The JIT Compiler has the following components:

Intermediate Code Generator - generates intermediate code

Code Optimizer - optimizes the intermediate code for better performance

Target Code Generator - converts intermediate code to native machine code


Profiler - finds the hotspots (code that is executed repeatedly)

What is PermGen ?

Short for Permanent Generation, PermGen is the memory area in the heap that is used by the JVM
to store class and method objects. If your application loads lots of classes, PermGen utilization will be
high. PermGen also holds 'interned' Strings.

The size of the PermGen space is configured by the Java command line option -XX:MaxPermSize

Typically 256 MB of PermGen space is more than enough for most applications.

However, it is not unusual to see the error "java.lang.OutOfMemoryError: PermGen space" if you are
loading an unusually large number of classes.

Gone are the days of OutOfMemoryErrors due to PermGen space: with Java 8 there is no PermGen, so
there are no more OutOfMemoryErrors caused by it.

The key difference between PermGen and Metaspace is this: while PermGen is part of Java Heap
(Maximum size configured by -Xmx option), Metaspace is NOT part of Heap. Rather Metaspace is part
of Native Memory (process memory) which is only limited by the Host Operating System.

So, what is the significance of this change?

While you will NOT run out of PermGen space anymore (since there is NO PermGen), you may consume
excessive Native memory making the total process size large. The issue is, if your application loads lots
of classes (and/or interned strings), you may actually bring down the Entire Server (not just your
application). Why ? Because the native memory is only limited by the Operating System. This means you
can literally take up all the memory on the Server. Not good.

It is critical that you add the new option -XX:MaxMetaspaceSize which sets the Maximum Metaspace
size for your application.
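The flags involved can be sketched as a launch command (sizes here are illustrative, not recommendations):

```shell
# Cap the Metaspace so runaway class loading fails fast instead of
# slowly exhausting native memory on the whole host
java -XX:MaxMetaspaceSize=256m -XX:MetaspaceSize=64m -version

# Inspect the effective Metaspace-related flags of the JVM
java -XX:MaxMetaspaceSize=256m -XX:+PrintFlagsFinal -version | grep -i metaspace
```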

Garbage Collector:

The Garbage Collector (GC) collects and removes unreferenced objects from the heap area. It reclaims
unused runtime memory automatically by destroying those objects.
Garbage collection makes Java memory-efficient because it removes the unreferenced objects from heap
memory and frees space for new objects. It involves two phases:

Mark - in this step, the GC traverses object references and marks every object that is still reachable (live)

Sweep - in this step, the GC reclaims the objects left unmarked during the previous phase

Garbage collection is performed automatically by the JVM at regular intervals and does not need to be
handled separately. It can also be requested by calling System.gc(), but execution is not guaranteed.

The JVM contains 3 different types of garbage collectors:

Serial GC - This is the simplest implementation of GC, and is designed for small applications running on
single-threaded environments. It uses a single thread for garbage collection. When it runs, it leads to a
"stop the world" event where the entire application is paused. The JVM argument to use Serial Garbage
Collector is -XX:+UseSerialGC

Parallel GC - This was the default GC implementation up to Java 8 (G1 became the default in Java 9),
and is also known as the Throughput Collector. It uses multiple threads for garbage collection, but still
pauses the application when running. The JVM argument to use the Parallel Garbage Collector is -XX:+UseParallelGC.

Garbage First (G1) GC - G1GC was designed for multi-threaded applications that have a large heap size
available (more than 4GB). It partitions the heap into a set of equal size regions, and uses multiple
threads to scan them. G1GC identifies the regions with the most garbage and performs garbage
collection on that region first. The JVM argument to use G1 Garbage Collector is -XX:+UseG1GC

Note: There is another type of garbage collector called Concurrent Mark Sweep (CMS) GC. However, it
has been deprecated since Java 9 and completely removed in Java 14 in favour of G1GC.

Never invoke GC programmatically from within your code

Invoking System.gc() may have significant performance side effects on the application. The GC is by
design an intelligent piece of code, and it knows when to invoke a partial or a full collection. Whenever
there is a need, it tries to collect space from the Young Generation first (very low performance
overhead), but when we force the JVM with System.gc(), it performs a Full GC, which might pause your
application for a certain amount of time. Isn't that a bad approach, then? Let the GC decide its timing.

Code Cache (non-heap): The HotSpot Java VM also includes a code cache, containing memory that is
used for compilation and storage of native code.
Common JVM Errors:

ClassNotFoundException - This occurs when the Class Loader is trying to load a class using
Class.forName(), ClassLoader.loadClass() or ClassLoader.findSystemClass(), but no definition for the
class with the specified name is found.

NoClassDefFoundError - This occurs when a compiler has successfully compiled the class, but the Class
Loader is not able to locate the class file at the runtime.

OutOfMemoryError - This occurs when the JVM cannot allocate an object because it is out of memory,
and no more memory could be made available by the garbage collector.

StackOverflowError - This occurs if the JVM runs out of space while creating new stack frames while
processing a thread.
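The first of these errors is easy to reproduce. A minimal sketch, using a made-up class name ("com.example.DoesNotExist" is hypothetical):

```java
public class MissingClassDemo {
    public static void main(String[] args) {
        try {
            // Reflective lookup of a class that is not on the classpath
            Class.forName("com.example.DoesNotExist");
        } catch (ClassNotFoundException e) {
            // Checked exception: the caller asked for the class by name
            System.out.println("caught " + e.getClass().getSimpleName());
        }
    }
}
```

NoClassDefFoundError, by contrast, is an Error raised when a class that compiled fine is missing at run time, so it is not normally caught.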

OOPS:

Inheritance

Encapsulation

Abstraction

Polymorphism

https://www.geektrust.in/blog/2021/03/26/oops-and-clean-code-part-1/

https://www.geektrust.in/blog/2021/03/29/oops-clean-code-part-2/

Inheritance:

Not all classes and objects are unique in terms of their properties and methods.

More often than not, you can easily derive a few properties and methods as common denominators
from a few classes.

These common properties and methods can be separated out and placed in a common class. All other
classes that need them can then "extend" that common class. In object-oriented programming, this
common class is called the base class or parent class, and the classes that extend it are known as
derived classes, child classes, or subclasses. Let's look at an example.
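The base/derived relationship above can be sketched as follows (Vehicle and Car are illustrative names, not from the notes):

```java
// Common state and behavior live in the base class...
class Vehicle {
    protected String name;
    Vehicle(String name) { this.name = name; }
    String describe() { return name + " can move"; }
}

// ...and subclasses extend it instead of duplicating it
class Car extends Vehicle {
    Car() { super("Car"); }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        System.out.println(new Car().describe());   // Car can move
    }
}
```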

Encapsulation:

Using encapsulation we can hide the class data members (variables) from other classes. When designing
classes based on their responsibilities, this principle reminds you to design components by focusing on
their behavior, and to hide their internal workings.

class Salary {

    private float amt;   // hidden from other classes

    public Salary(float amt) {
        this.amt = amt;
    }

    // behavior is exposed instead of the raw field
    public void incrementByPerc(float per) {
        this.amt += this.amt * (per / 100);
    }
}

Abstraction:

In some of your applications, you will encounter functionalities that a few classes have in common.

But at the same time, even though the functionality is the same, the implementation is not.

Abstract class:

Classes and abstract classes maintain state.

Interface:

Interfaces do not maintain state. Before Java 8 they contained only public abstract methods, and they have no constructor.

An interface cannot be instantiated; that is why its variables are implicitly static.

An interface defines a pure contract; that is why its variables are implicitly final.

Java 8 introduced default methods in interfaces. After an interface has been released, default methods
can still be added to it without breaking client code.

Java 8 also introduced static methods in interfaces. Static methods can act as utility methods and do
not preserve state.
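Both Java 8 additions can be sketched in one interface (PaymentProcessor and the tax rate are illustrative):

```java
interface PaymentProcessor {
    void pay(double amount);

    // Default method: added after release without breaking implementors
    default void payTwice(double amount) {
        pay(amount);
        pay(amount);
    }

    // Static utility method: no instance state involved
    static double withTax(double amount) {
        return amount * 1.2;   // illustrative tax rate
    }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        PaymentProcessor p = amt -> System.out.println("paid " + amt);
        p.payTwice(10);
        System.out.println(PaymentProcessor.withTax(100));
    }
}
```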
Prefer interfaces over abstract classes.

Refer to objects through their interfaces.

Polymorphism:

Polymorphism, as the name suggests, is a principle that allows us to declare multiple methods with the
same name but different implementations.

There are two types here – static polymorphism and dynamic polymorphism.
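The two types can be sketched side by side (class names here are illustrative): overloading is resolved by the compiler (static), overriding by the runtime type of the object (dynamic).

```java
class Printer {
    // Static (compile-time) polymorphism: overloading resolved by the compiler
    String print(int v) { return "int:" + v; }
    String print(String v) { return "str:" + v; }
}

class Base {
    String who() { return "base"; }
}

class Derived extends Base {
    // Dynamic (runtime) polymorphism: overriding resolved at run time
    @Override String who() { return "derived"; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        System.out.println(new Printer().print(7));   // int:7
        Base b = new Derived();
        System.out.println(b.who());                  // derived
    }
}
```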

Java is strictly pass-by-value. For a primitive, the method receives a copy of the variable; for an object,
it receives a copy of the reference.
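A short sketch of both cases:

```java
public class PassByValue {
    static void bump(int n) { n++; }            // mutates a copy of the primitive

    static void mutate(int[] a) { a[0] = 99; }  // copy of the reference, same array

    public static void main(String[] args) {
        int x = 1;
        bump(x);
        System.out.println(x);        // 1 -- caller's primitive is unchanged

        int[] arr = {1};
        mutate(arr);
        System.out.println(arr[0]);   // 99 -- both references point at the same object
    }
}
```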

Java --> partially object-oriented language

There are seven qualities to be satisfied for a programming language to be pure Object Oriented. They
are:

Encapsulation/Data Hiding

Inheritance

Polymorphism

Abstraction

All predefined types are objects --> Java does not follow

All user-defined types are objects --> Java does not follow

All operations performed on objects must be only through methods exposed by the objects --> Java
does not follow; for example, String str = "A" + "B"; uses an operator instead of a method.

Enum:

Without Enums we used to write class like this.

public class Genre
{
    // movie group name
    public static final int MOVIE_GENRE_HORROR = 0;
    public static final int MOVIE_GENRE_DRAMA = 1;

    // book group name
    public static final int BOOK_GENRE_BIOGRAPHY = 10;
    public static final int BOOK_GENRE_HORROR = 11;

    private Genre() {}
}

1. No type safety

2. Brittle

3. No namespace protection

4. Not easy to print constant names

5. Cannot iterate over a group of constants

If you use enums instead of integers (or String codes), you increase compile-time checking, avoid errors
from passing in invalid constants, and document which values are legal to use.

Every enum is internally implemented by using Class.

/* internally above enum Color is converted to

enum Color
{
RED, GREEN, BLUE;
}
class Color
{
public static final Color RED = new Color();
public static final Color BLUE = new Color();
public static final Color GREEN = new Color();
}*/
Every enum constant represents an object of type enum.

enum GeneralInformation {
    NAME;
}

private static void print(GeneralInformation val) {
    System.out.println(val);
}

public static void main(String[] args) {
    print(GeneralInformation.NAME);
}
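The advantages over the int-constant class can be sketched concretely: enums are type-checked at compile time and iterable (the demo class is illustrative):

```java
enum Color { RED, GREEN, BLUE }

public class EnumDemo {
    static String paint(Color c) {          // paint(7) would not even compile
        return "painting " + c.name();
    }

    public static void main(String[] args) {
        // Iterate over all constants -- impossible with plain int constants
        for (Color c : Color.values()) {
            System.out.println(c.ordinal() + ":" + c.name());
        }
        System.out.println(paint(Color.RED));   // painting RED
    }
}
```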

20.JUNIT5-->https://kheri.net/mockito2-tutorial/

Threads:

Threads let a blocking operation be made non-blocking (asynchronous behavior).

Thread Pool Executor:

The Executors class provides simple implementations of ExecutorService using ThreadPoolExecutor, but
ThreadPoolExecutor itself offers many more features. We can specify the number of threads that
will be alive when we create a ThreadPoolExecutor instance, limit the size of the thread pool, and
create our own RejectedExecutionHandler implementation to handle the jobs that can't fit in the
worker queue. Here is our custom implementation of the RejectedExecutionHandler interface.
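The RejectedExecutionHandlerImpl promised above, and the WorkerThread task used later in this section, are not shown in the notes; minimal sketches with assumed behavior might look like:

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Invoked when the work queue is full and the pool is at maximum size
class RejectedExecutionHandlerImpl implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.out.println(r.toString() + " is rejected");
    }
}

// A trivial unit of work to submit to the pool
class WorkerThread implements Runnable {
    private final String command;

    WorkerThread(String command) { this.command = command; }

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " processing " + command);
    }

    @Override
    public String toString() { return command; }
}
```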

• Group of threads readily available


• CPU intensive: Thread pool size=no of cores
• I/O task: Thread pool size>no of cores
• No need to manually create, start and join threads

import java.util.concurrent.ThreadPoolExecutor;

public class MyMonitorThread implements Runnable {

    private ThreadPoolExecutor executor;
    private int seconds;
    private boolean run = true;

    public MyMonitorThread(ThreadPoolExecutor executor, int delay) {
        this.executor = executor;
        this.seconds = delay;
    }

    public void shutdown() {
        this.run = false;
    }

    @Override
    public void run() {
        while (run) {
            System.out.println(
                String.format("[monitor] [%d/%d] Active: %d, Completed: %d, Task: %d, isShutdown: %s, isTerminated: %s",
                    this.executor.getPoolSize(),
                    this.executor.getCorePoolSize(),
                    this.executor.getActiveCount(),
                    this.executor.getCompletedTaskCount(),
                    this.executor.getTaskCount(),
                    this.executor.isShutdown(),
                    this.executor.isTerminated()));
            try {
                Thread.sleep(seconds * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WorkerPool {

    public static void main(String args[]) throws InterruptedException {
        // RejectedExecutionHandler implementation
        RejectedExecutionHandlerImpl rejectionHandler = new RejectedExecutionHandlerImpl();
        // Get the ThreadFactory implementation to use
        ThreadFactory threadFactory = Executors.defaultThreadFactory();
        // creating the ThreadPoolExecutor
        ThreadPoolExecutor executorPool = new ThreadPoolExecutor(2, 4, 10, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2), threadFactory, rejectionHandler);
        // start the monitoring thread
        MyMonitorThread monitor = new MyMonitorThread(executorPool, 3);
        Thread monitorThread = new Thread(monitor);
        monitorThread.start();
        // submit work to the thread pool
        for (int i = 0; i < 10; i++) {
            executorPool.execute(new WorkerThread("cmd" + i));
        }

        Thread.sleep(30000);
        // shut down the pool
        executorPool.shutdown();
        // shut down the monitor thread
        Thread.sleep(5000);
        monitor.shutdown();
    }
}

Notice that while initializing the ThreadPoolExecutor, we are keeping initial pool size as 2, maximum
pool size to 4 and work queue size as 2. So if there are 4 running tasks and more tasks are submitted,
the work queue will hold only 2 of them and the rest of them will be handled by
RejectedExecutionHandlerImpl. Here is the output of the above program that confirms the above
statement.

Executor Service: an asynchronous task-execution engine, introduced in Java 5.

It enabled coarse-grained, task-based parallelism in Java.

ExecutorService execService = Executors.newFixedThreadPool(5);

execService.execute(new Runnable() {
    public void run() {
        System.out.println("asynchronous task created");
    }
});
execService.shutdown();

• The framework manages a homogeneous pool of worker threads.


• Thread pools will be tightly coupled with the work queue.
• The lifecycle of a worker thread is as follows:
• A thread in the pool requests a new task from the work queue,
• executes the task provided by the work queue, and
• goes back to the thread pool to wait for future tasks.
• Using thread pools over thread-per-task comes with a lot of benefits:
• Reusing an already created thread that is waiting in the pool avoids the cost of creating a new
thread for every request, which improves responsiveness.
• The size of the thread pool plays a major role in keeping your processors busy and using
resources efficiently, while not having so many threads that your application runs out of
memory or the threads pressure the system by competing for resources.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newCachedThreadPool();

        List<Callable<Integer>> listOfCallable = Arrays.asList(
                () -> 1,
                () -> 2,
                () -> 3);
        try {
            List<Future<Integer>> futures = executor.invokeAll(listOfCallable);
            int sum = futures.stream().map(f -> {
                try {
                    return f.get();
                } catch (Exception e) {
                    throw new IllegalStateException(e);
                }
            }).mapToInt(Integer::intValue).sum();
            System.out.println(sum);
        } catch (InterruptedException e) { // thread was interrupted
            e.printStackTrace();
        } finally {
            // shut down the executor manually
            executor.shutdown();
        }
    }
}

The main difference between Executor, ExecutorService, and Executors class is that Executor is the core
interface which is an abstraction for parallel execution. It separates tasks from execution, this is
different from java.lang.Thread class which combines both task and its execution.

ExecutorService is an extension of the Executor interface that can return a Future and terminate (shut
down) the thread pool. Once shutdown is called, the pool stops accepting new tasks but completes any
pending ones. It also provides submit(), which generalizes Executor.execute() by returning a Future.
The Future object provides asynchronous execution: you don't need to wait until the execution finishes.
You can submit the task, carry on, and come back later to check whether the Future has a result; if the
execution is complete, you can access the result using Future.get(). Just remember that get() is a
blocking method: if the execution has not finished yet, it waits until it does and the result is available.

By using the Future object returned by ExecutorService.submit() method, you can also cancel the
execution if you are not interested anymore. It provides a cancel() method to cancel any pending
execution.

Another important difference between ExecutorService and Executor is that Executor defines execute()
method which accepts an object of the Runnable interface, while submit() method can accept objects of
both Runnable and Callable interfaces.

Executors is a utility class similar to Collections, which provides factory methods to create different
types of thread pools

Shutting down the ExecutorService

ExecutorService provides two methods for shutting down an executor -

shutdown() - when shutdown() method is called on an executor service, it stops accepting new tasks,
waits for previously submitted tasks to execute, and then terminates the executor.

shutdownNow() - this method interrupts running tasks, shuts down the executor immediately, and returns the list of tasks that were awaiting execution.
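The two methods are usually combined into a graceful-then-forceful pattern; a sketch (timeout value is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("task ran"));

        pool.shutdown();   // stop accepting new tasks, let pending ones finish
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow();   // interrupt stragglers after the grace period
        }
        System.out.println("terminated: " + pool.isTerminated());
    }
}
```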

Read more: https://javarevisited.blogspot.com/2017/02/difference-between-executor-executorservice-and-executors-in-java.html


Executor and ExecutorService are the main Interfaces. ExecutorService can execute Runnable and
Callable tasks.

Executor interface has execute method. ExecutorService has submit(), invokeAny() and invokeAll().

Executors is a utility/factory class whose methods create and return ExecutorService instances.

public class ExecutorServiceTest {

    public static void main(String[] args) {
        ExecutorService s = Executors.newFixedThreadPool(10);
        s.submit(new Task("Mythread1"));   // Task is a user-defined Runnable (not shown)
        s.submit(new Task("Mythread2"));
        //s.shutdown();
    }
}

Runnable r = () -> {
    System.out.println("test");
};
ExecutorService es = Executors.newFixedThreadPool(10);
es.execute(r); // or es.submit(r);

Callable<String> c = () -> {
    System.out.println("Callable test");
    return "test";
};
List<Callable<String>> al = new ArrayList<>();
al.add(c);
al.add(c);
es.invokeAll(al);
String result = es.invokeAny(al);

Problems with ExecutorService:

Keeping an unused ExecutorService alive:

Wrong thread-pool capacity while using fixed length thread pool:

Calling a Future‘s get() method after task cancellation:

Unexpectedly long blocking with Future‘s get() method:

CountDownLatch is a construct that a thread waits on while other threads count down on the latch until
it reaches zero. You use the Future to obtain a result from a submitted Callable, and you use a
CountDownLatch when you want to be notified when all threads have completed -- the two are not
directly related, and you use one, the other or both together when and where needed.

CountDownLatch latch = new CountDownLatch(4);


// Creating worker threads
Worker first = new Worker(1000, latch, "WORKER-1");
Worker second = new Worker(2000, latch, "WORKER-2");
Worker third = new Worker(3000, latch, "WORKER-3");
Worker fourth = new Worker(4000, latch, "WORKER-4");
// Starting above 4 threads
first.start();
second.start();
third.start();
fourth.start();
// The main task waits for four threads
latch.await();

class Worker extends Thread {

    private int delay;
    private CountDownLatch latch;

    public Worker(int delay, CountDownLatch latch, String name) {
        super(name);
        this.delay = delay;
        this.latch = latch;
    }

    @Override
    public void run() {
        try {
            Thread.sleep(delay);
            latch.countDown();
            System.out.println(Thread.currentThread().getName() + " finished");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

• In a hypothetical theater (synchronization methods):
• It is called a Mutex if only one person is allowed to watch the play.
• It is called a Semaphore if N people are allowed to watch the play. If anybody leaves the theater
during the play, another person can be allowed to watch it.
• It is called a CountDownLatch if no one is allowed to enter until every person vacates the theater.
Here each person has free will to leave the theater.
• It is called a CyclicBarrier if the play will not start until every person enters the theater. Here a
showman cannot start the show until all the persons enter and grab a seat. Once the play is
finished, the same barrier applies for the next show.
• Here, a person is a thread and a play is a resource.

CyclicBarrier is a synchronizer that allows a set of threads to wait for each other to reach a common
execution point, also called a barrier.

CyclicBarriers are used in programs in which we have a fixed number of threads that must wait for each
other to reach a common point before continuing execution.

The barrier is called cyclic because it can be re-used after the waiting threads are released.

CyclicBarrier waits for a certain number of threads, while CountDownLatch waits for a certain number of
events (one thread could call CountDownLatch.countDown() several times). A CountDownLatch cannot be
reused once opened. Also, the thread that calls CountDownLatch.countDown() only signals that it
finished some work; it does not block (whereas CyclicBarrier.await() is a blocking method) and can
continue with other work.

The CyclicBarrier uses an all-or-none breakage model for failed synchronization attempts: If a thread
leaves a barrier point prematurely because of interruption, failure, or timeout, all other threads waiting
at that barrier point will also leave abnormally via BrokenBarrierException (or InterruptedException if
they too were interrupted at about the same time).

cyclicBarrier = new CyclicBarrier(NUM_WORKERS, new AggregatorThread());

class NumberCruncherThread implements Runnable {

    @Override
    public void run() {
        String thisThreadName = Thread.currentThread().getName();
        List<Integer> partialResult = new ArrayList<>();

        // Crunch some numbers and store the partial result
        for (int i = 0; i < NUM_PARTIAL_RESULTS; i++) {
            Integer num = random.nextInt(10);
            System.out.println(thisThreadName + ": Crunching some numbers! Final result - " + num);
            partialResult.add(num);
        }

        partialResults.add(partialResult);
        try {
            System.out.println(thisThreadName + " waiting for others to reach barrier.");
            cyclicBarrier.await();
        } catch (InterruptedException e) {
            // ...
        } catch (BrokenBarrierException e) {
            // ...
        }
    }
}

public class CyclicBarrierDemo {

    // Previous code

    public void runSimulation(int numWorkers, int numberOfPartialResults) {
        NUM_PARTIAL_RESULTS = numberOfPartialResults;
        NUM_WORKERS = numWorkers;

        cyclicBarrier = new CyclicBarrier(NUM_WORKERS, new AggregatorThread());

        System.out.println("Spawning " + NUM_WORKERS
                + " worker threads to compute "
                + NUM_PARTIAL_RESULTS + " partial results each");

        for (int i = 0; i < NUM_WORKERS; i++) {
            Thread worker = new Thread(new NumberCruncherThread());
            worker.setName("Thread " + i);
            worker.start();
        }
    }

    public static void main(String[] args) {
        CyclicBarrierDemo demo = new CyclicBarrierDemo();
        demo.runSimulation(5, 3);
    }
}
Limitations of the Future

• A Future cannot be manually completed.
• We cannot perform further actions on a Future's result without blocking.
• The Future API has no exception-handling support.
• We cannot combine multiple Futures.

CompletableFuture: an asynchronous, reactive, functional-style programming API.

It was created to solve the limitations of the Future API.

Java 8 introduced the CompletableFuture class, which implements both the Future interface and the
CompletionStage interface.

This interface defines the contract for an asynchronous computation step that we can combine with
other steps.

Combining Futures

The best part of the CompletableFuture API is the ability to combine CompletableFuture instances in a
chain of computation steps.
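Combining two independent instances can be sketched with thenCombine (the values here are illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 20);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 22);

        // thenCombine joins two independent futures once both complete
        CompletableFuture<Integer> sum = a.thenCombine(b, Integer::sum);
        System.out.println(sum.get());   // 42
    }
}
```

thenCompose is the related operation for chaining futures where the second depends on the first's result.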

Running asynchronous computation using runAsync():

If you want to run some background task asynchronously and don’t want to return anything from the
task, then you can use CompletableFuture.runAsync() method. It takes a Runnable object and returns
CompletableFuture<Void>.

// Run a task specified by a Runnable object asynchronously.
CompletableFuture<Void> future = CompletableFuture.runAsync(new Runnable() {
    @Override
    public void run() {
        // Simulate a long-running job
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        System.out.println("I'll run in a separate thread than the main thread.");
    }
});

// Block and wait for the future to complete
future.get();
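Since Runnable is a functional interface, the same runAsync() call is usually written more concisely with a lambda; a minimal sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class RunAsyncLambdaDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(100); // simulate some work
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            System.out.println("Running in thread: " + Thread.currentThread().getName());
        });

        future.get(); // block until the task completes
    }
}
```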

Run a task asynchronously and return the result using supplyAsync():

CompletableFuture.runAsync() is useful for tasks that don’t return anything. But what if you want to
return some result from your background task?

Well, CompletableFuture.supplyAsync() is your companion. It takes a Supplier<T> and returns CompletableFuture<T>, where T is the type of the value obtained by calling the given supplier. A Supplier<T> is a simple functional interface which represents a supplier of results. It has a single get() method where you can write your background task and return the result.

// Run a task specified by a Supplier object asynchronously
CompletableFuture<String> future = CompletableFuture.supplyAsync(new Supplier<String>() {
    @Override
    public String get() {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return "Result of the asynchronous computation";
    }
});

// Block and get the result of the Future
String result = future.get();
System.out.println(result);

1. static Supplier<String> sup = () -> "Hello";

   public static void main(String[] args) {
       CompletableFutureTest cft = new CompletableFutureTest();
       CompletableFuture.supplyAsync(sup)               // supplier
           .thenApply(s -> s.toUpperCase())             // function
           .thenAccept(s1 -> System.out.println(s1));   // consumer
   }

2. CompletableFuture<String> completableFuture
       = CompletableFuture.supplyAsync(() -> "Hello")
           .thenCompose(s -> CompletableFuture.supplyAsync(() -> s + " World"));

3. CompletableFuture<OrchestrationContext> completableFuture =
       CompletableFuture.supplyAsync(() -> {
           return prepareWorkflow(initialContext);
       }).thenApply(stage1Context -> {
           return performObjectCreation(stage1Context);
       }).handle((finalContext, ex) -> {
           return performFinalResponse(finalContext,
               initialContext.getOrchRequest().getProductid(), ex);
       });
   completableFuture.get();

Transforming and acting on a CompletableFuture:

The CompletableFuture.get() method is blocking. It waits until the Future is completed and returns the
result after its completion.

But, that’s not what we want right? For building asynchronous systems we should be able to attach a
callback to the CompletableFuture which should automatically get called when the Future completes.

That way, we won’t need to wait for the result, and we can write the logic that needs to be executed
after the completion of the Future inside our callback function.

You can attach a callback to the CompletableFuture using the thenApply(), thenAccept() and thenRun() methods.

thenApply: You can use thenApply() method to process and transform the result of a
CompletableFuture when it arrives. It takes a Function<T,R> as an argument. Function<T,R> is a simple
functional interface representing a function that accepts an argument of type T and produces a result of
type R –

// Create a CompletableFuture
CompletableFuture<String> whatsYourNameFuture = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return "Rajeev";
});

// Attach a callback to the Future using thenApply()
CompletableFuture<String> greetingFuture = whatsYourNameFuture.thenApply(name -> {
    return "Hello " + name;
});

// Block and get the result of the future.
System.out.println(greetingFuture.get()); // Hello Rajeev

thenApply():

You can also write a sequence of transformations on the CompletableFuture by attaching a series of thenApply() callbacks. The result of one thenApply() method is passed to the next in the series:

CompletableFuture<String> welcomeText = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return "Rajeev";
}).thenApply(name -> {
return "Hello " + name;
}).thenApply(greeting -> {
return greeting + ", Welcome to the CalliCoder Blog";
});

System.out.println(welcomeText.get());
// Prints - Hello Rajeev, Welcome to the CalliCoder Blog

thenAccept() and thenRun():

If you don’t want to return anything from your callback function and just want to run some piece of
code after the completion of the Future, then you can use thenAccept() and thenRun() methods. These
methods are consumers and are often used as the last callback in the callback chain.

CompletableFuture.thenAccept() takes a Consumer<T> and returns CompletableFuture<Void>. It has access to the result of the CompletableFuture on which it is attached.

CompletableFuture.supplyAsync(() -> {
    return ProductService.getProductDetail(productId);
}).thenAccept(product -> {
    System.out.println("Got product detail from remote service " + product.getName());
});

While thenAccept() has access to the result of the CompletableFuture on which it is attached, thenRun() doesn't even have access to the Future's result. It takes a Runnable and returns CompletableFuture<Void>.
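A minimal thenRun() sketch (the result string here is illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class ThenRunDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Void> future = CompletableFuture
            .supplyAsync(() -> "some result")
            .thenRun(() -> {
                // No access to the result here; just run code after completion.
                System.out.println("Computation finished.");
            });

        future.get();
    }
}
```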

Combine two dependent futures using thenCompose():

Let’s say that you want to fetch the details of a user from a remote API service, and once the user’s details are available, you want to fetch the user’s credit rating from another service.

CompletableFuture<User> getUserDetail(String userId) {
    return CompletableFuture.supplyAsync(() -> {
        return UserService.getUserDetails(userId);
    });
}

CompletableFuture<Double> getCreditRating(User user) {
    return CompletableFuture.supplyAsync(() -> {
        return CreditRatingService.getCreditRating(user);
    });
}

CompletableFuture<CompletableFuture<Double>> result = getUserDetail(userId)
    .thenApply(user -> getCreditRating(user));

In earlier examples, the function passed to the thenApply() callback returned a simple value, but in this case it returns a CompletableFuture. Therefore, the final result in the above case is a nested CompletableFuture.

If you want the final result to be a top-level Future, use the thenCompose() method instead:

CompletableFuture<Double> result = getUserDetail(userId)
    .thenCompose(user -> getCreditRating(user));

Combine two independent futures using thenCombine()

While thenCompose() is used to combine two Futures where one future is dependent on the
other, thenCombine() is used when you want two Futures to run independently and do something after
both are complete.

System.out.println("Retrieving weight.");
CompletableFuture<Double> weightInKgFuture = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return 65.0;
});

System.out.println("Retrieving height.");
CompletableFuture<Double> heightInCmFuture = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return 177.8;
});

System.out.println("Calculating BMI.");
CompletableFuture<Double> combinedFuture = weightInKgFuture
.thenCombine(heightInCmFuture, (weightInKg, heightInCm) -> {
Double heightInMeter = heightInCm/100;
return weightInKg/(heightInMeter*heightInMeter);
});

System.out.println("Your BMI is - " + combinedFuture.get());

Running Multiple Futures in Parallel

When we need to execute multiple Futures in parallel, we usually want to wait for all of them to execute
and then process their combined results.

The CompletableFuture.allOf static method allows us to wait for the completion of all the Futures provided as a var-arg:

CompletableFuture<String> future1
= CompletableFuture.supplyAsync(() -> "Hello");
CompletableFuture<String> future2
= CompletableFuture.supplyAsync(() -> "Beautiful");
CompletableFuture<String> future3
= CompletableFuture.supplyAsync(() -> "World");

CompletableFuture<Void> combinedFuture
= CompletableFuture.allOf(future1, future2, future3);

// ...

combinedFuture.get();
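Since allOf() returns CompletableFuture<Void>, the individual results still have to be collected from each future afterwards, typically with join(); a sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class AllOfDemo {
    public static void main(String[] args) {
        CompletableFuture<String> future1 = CompletableFuture.supplyAsync(() -> "Hello");
        CompletableFuture<String> future2 = CompletableFuture.supplyAsync(() -> "Beautiful");
        CompletableFuture<String> future3 = CompletableFuture.supplyAsync(() -> "World");

        // Wait for all futures, then join their results into one string.
        String combined = CompletableFuture.allOf(future1, future2, future3)
            .thenApply(v -> Stream.of(future1, future2, future3)
                .map(CompletableFuture::join) // join() throws no checked exceptions
                .collect(Collectors.joining(" ")))
            .join();

        System.out.println(combined); // Hello Beautiful World
    }
}
```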

Error Handling:

CompletableFuture<String> completableFuture
= CompletableFuture.supplyAsync(() -> {
if (name == null) {
throw new RuntimeException("Computation error!");
}
return "Hello, " + name;
}).handle((s, t) -> s != null ? s : "Hello, Stranger!");
assertEquals("Hello, Stranger!", completableFuture.get());
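Besides handle(), the API also offers exceptionally(), whose callback runs only when the computation completes with an error; a sketch using the same scenario:

```java
import java.util.concurrent.CompletableFuture;

public class ExceptionallyDemo {
    public static void main(String[] args) throws Exception {
        String name = null; // assume this came from user input

        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            if (name == null) {
                throw new RuntimeException("Computation error!");
            }
            return "Hello, " + name;
        }).exceptionally(ex -> "Hello, Stranger!"); // fallback value on error

        System.out.println(future.get()); // Hello, Stranger!
    }
}
```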

Fork-Join Framework (Java 7):

The Executor framework was enhanced to support fork-join tasks, which run on a special kind of executor service known as a fork-join pool.

It achieves data parallelism: a task is divided into multiple subtasks until each subtask reaches its least possible size, and those subtasks are executed in parallel.

Work-Stealing Algorithm: Simply put, free threads try to “steal” work from deques of busy threads.

ForkJoinTask<V>

ForkJoinTask is the base type for tasks executed inside ForkJoinPool. In practice, one of its two
subclasses should be extended: the RecursiveAction for void tasks and the RecursiveTask<V> for tasks
that return a value. They both have an abstract method compute() in which the task’s logic is defined.

RecursiveTask: returns a value

In this example, we use an array stored in the arr field of the CustomRecursiveTask class to represent the
work. The createSubtasks() method recursively divides the task into smaller pieces of work until each
piece is smaller than the threshold. Then the invokeAll() method submits the subtasks to the common
pool and returns a list of Future.

To trigger execution, the join() method is called for each subtask.

public class CustomRecursiveTask extends RecursiveTask<Integer> {

    private int[] arr;

    private static final int THRESHOLD = 20;

    public CustomRecursiveTask(int[] arr) {
        this.arr = arr;
    }

@Override
protected Integer compute() {
if (arr.length > THRESHOLD) {
return ForkJoinTask.invokeAll(createSubtasks())
.stream()
.mapToInt(ForkJoinTask::join)
.sum();
} else {
return processing(arr);
}
}

    private Collection<CustomRecursiveTask> createSubtasks() {
        List<CustomRecursiveTask> dividedTasks = new ArrayList<>();
        dividedTasks.add(new CustomRecursiveTask(
            Arrays.copyOfRange(arr, 0, arr.length / 2)));
        dividedTasks.add(new CustomRecursiveTask(
            Arrays.copyOfRange(arr, arr.length / 2, arr.length)));
        return dividedTasks;
    }

    private Integer processing(int[] arr) {
        return Arrays.stream(arr)
            .filter(a -> a > 10 && a < 27)
            .map(a -> a * 10)
            .sum();
    }
}
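To execute the task above, the root task can be submitted to a ForkJoinPool; a sketch using the common pool (this assumes the CustomRecursiveTask class above is on the classpath):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class ForkJoinRunner {
    public static void main(String[] args) {
        int[] arr = IntStream.rangeClosed(1, 100).toArray();

        // invoke() runs the task on the pool and waits for its combined result.
        Integer result = ForkJoinPool.commonPool()
            .invoke(new CustomRecursiveTask(arr));

        // Given the filter in processing(), this is 2960 for the input 1..100.
        System.out.println("Sum of processed elements: " + result);
    }
}
```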

RecursiveAction: does not return anything

The example splits the task if workload.length() is larger than a specified threshold, using the createSubtasks() method.

The String is recursively divided into substrings, creating CustomRecursiveAction instances that are based on these substrings.

As a result, the method returns a List<CustomRecursiveAction>.

The list is submitted to the ForkJoinPool using the invokeAll() method:

public class CustomRecursiveAction extends RecursiveAction {

    private String workload = "";

    private static final int THRESHOLD = 4;

    private static Logger logger = Logger.getAnonymousLogger();

    public CustomRecursiveAction(String workload) {
        this.workload = workload;
    }

@Override
protected void compute() {
if (workload.length() > THRESHOLD) {
ForkJoinTask.invokeAll(createSubtasks());
} else {
processing(workload);
}
}

    private List<CustomRecursiveAction> createSubtasks() {
        List<CustomRecursiveAction> subtasks = new ArrayList<>();

        String partOne = workload.substring(0, workload.length() / 2);
        String partTwo = workload.substring(workload.length() / 2, workload.length());

        subtasks.add(new CustomRecursiveAction(partOne));
        subtasks.add(new CustomRecursiveAction(partTwo));

        return subtasks;
    }

    private void processing(String work) {
        String result = work.toUpperCase();
        logger.info("This result - (" + result + ") - was processed by "
            + Thread.currentThread().getName());
    }
}
public static ForkJoinPool forkJoinPool = new ForkJoinPool(2);

With ForkJoinPool’s constructors, we can create a custom thread pool with a specific level of parallelism, thread factory and exception handler. Here the pool has a parallelism level of 2, meaning the pool will use two worker threads.

From Java 8 onwards, we can usually use parallel streams instead of using the Fork/Join framework directly.

Parallel Streams:

Split → Spliterator

Execute → ForkJoin

Combine → Collect

int sumOfWeights = widgets.parallelStream()
    .filter(b -> b.getColor() == RED)
    .mapToInt(b -> b.getWeight())
    .sum();

Intrinsic (synchronized) and Reentrant/Explicit Locks (tryLock()):

1) A significant difference between ReentrantLock and the synchronized keyword is fairness.

The synchronized keyword doesn't support fairness: any thread can acquire the lock once it is released, and no preference can be specified. On the other hand, you can make a ReentrantLock fair by specifying the fairness property while creating the instance. The fairness property grants the lock to the longest-waiting thread in case of contention.

2) The second difference between synchronized and ReentrantLock is the tryLock() method.

ReentrantLock provides a convenient tryLock() method, which acquires the lock only if it is available and not held by any other thread. This reduces the blocking of threads waiting for a lock in a Java application.

3) Another noteworthy difference between ReentrantLock and the synchronized keyword in Java is the ability to interrupt a thread while it is waiting for a lock.

With the synchronized keyword, a thread can be blocked waiting for a lock for an indefinite period of time, and there is no way to control that.

ReentrantLock provides a method called lockInterruptibly(), which can be used to interrupt a thread while it is waiting for a lock.

Similarly, tryLock() with a timeout can be used to give up if the lock is not available within a certain time period.

4) ReentrantLock also provides convenient methods to query the threads waiting for the lock.
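A tryLock() sketch illustrating points 1, 2 and 4 (the class and counter here are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock
    private int counter = 0;

    public boolean incrementIfFree() throws InterruptedException {
        // Acquire the lock only if it becomes available within 1 second.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                counter++;
                return true;
            } finally {
                lock.unlock(); // always release in finally
            }
        }
        return false; // lock was held elsewhere; no indefinite blocking
    }

    public static void main(String[] args) throws InterruptedException {
        TryLockDemo demo = new TryLockDemo();
        System.out.println("Incremented: " + demo.incrementIfFree());
        System.out.println("Queued threads: " + demo.lock.getQueueLength());
    }
}
```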

Volatile: volatile is a keyword. It forces all threads to read the latest value of a variable from main memory instead of from a cache.

No locking is required to access volatile variables. All threads can access a volatile variable's value at the same time.

Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable.

This means that changes to a volatile variable are always visible to other threads.

What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change.

When to use: one thread modifies the data and other threads have to read the latest value of the data. The other threads will take some action but they won't update the data.
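The classic use case is a status flag written by one thread and read by others; a minimal sketch:

```java
public class VolatileFlagDemo {
    // Without volatile, the reader thread might cache 'running'
    // and never observe the update made by the main thread.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; each iteration reads the latest value of 'running'
            }
            System.out.println("Worker observed the flag change and stopped.");
        });
        worker.start();

        Thread.sleep(100);
        running = false; // this write is visible to the worker thread
        worker.join();
    }
}
```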
https://www.callicoder.com/java-locks-and-atomic-variables-tutorial/

https://www.callicoder.com/java-concurrency-issues-and-thread-synchronization/

AtomicXXX:

The AtomicXXX classes support lock-free, thread-safe programming on single variables. These AtomicXXX classes (like AtomicInteger) resolve the memory inconsistency errors / side effects of modifying volatile variables that are accessed by multiple threads.

When to use: multiple threads can read and modify the data.

They rely on the compare-and-swap (CAS) mechanism to ensure data integrity.

A typical CAS operation works on three operands:

• The memory location on which to operate (M)
• The existing expected value (A) of the variable
• The new value (B) which needs to be set
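A minimal AtomicInteger sketch showing both the high-level increment and the underlying compareAndSet operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // Atomic read-modify-write: internally retries CAS until it succeeds.
        counter.incrementAndGet(); // value is now 1

        // Explicit CAS: set to 5 only if the current (expected) value is 1.
        boolean swapped = counter.compareAndSet(1, 5);
        System.out.println("Swapped: " + swapped + ", value: " + counter.get()); // Swapped: true, value: 5

        // CAS fails when the expected value no longer matches.
        boolean swappedAgain = counter.compareAndSet(1, 10);
        System.out.println("Swapped: " + swappedAgain + ", value: " + counter.get()); // Swapped: false, value: 5
    }
}
```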

synchronized:

synchronized is a keyword used to guard a method or code block. Making a method synchronized has two effects:

First, it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.

Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.

When to use: multiple threads can read and modify the data, and your business logic not only updates the data but also executes compound atomic operations.

AtomicXXX is the equivalent of volatile + synchronized even though the implementation is different.

AtomicXXX extends volatile variables with compareAndSet methods but does not use synchronization.
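A synchronized counter sketch covering both effects described above (mutual exclusion, plus visibility via the happens-before on method exit):

```java
public class SynchronizedCounter {
    private int count = 0;

    // Only one thread at a time can execute this method on a given instance,
    // and the updated count is visible to the next thread that enters.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();
        Runnable task = () -> { for (int i = 0; i < 1000; i++) counter.increment(); };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Always 2000; without synchronized, lost updates could make it less.
        System.out.println(counter.get());
    }
}
```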
TDD vs BDD vs ATDD

The simple concept of TDD is to write and correct failing tests before writing new code (before development).

BDD gives a clearer understanding of what the system should do from the perspective of both the developer and the customer.

TDD allows a good and robust design; still, your tests can be very far away from the users' requirements.

BDD is a way to ensure consistency between requirements and the developer tests.

Why Use Nested Classes?

It is a way of logically grouping classes that are only used in one place: If a class is useful to only one
other class, then it is logical to embed it in that class and keep the two together. Nesting such "helper
classes" makes their package more streamlined.

It increases encapsulation: Consider two top-level classes, A and B, where B needs access to members of
A that would otherwise be declared private. By hiding class B within class A, A's members can be
declared private and B can access them. In addition, B itself can be hidden from the outside world.

It can lead to more readable and maintainable code: Nesting small classes within top-level classes places
the code closer to where it is used.

Static nested classes: can access the outer class's static members.

EnclosingClass.Nested n1 = new EnclosingClass.Nested();

Non-static nested classes (inner classes): an inner class can access both static and non-static members of the outer class, even if they are private.

Outer outer = new Outer();

Outer.Inner inner = outer.new Inner();
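Putting the two instantiation forms together; a sketch with hypothetical Outer/Nested/Inner names:

```java
public class Outer {
    private static String staticSecret = "static-secret";
    private String instanceSecret = "instance-secret";

    // Static nested class: can only reach the outer class's static members.
    static class Nested {
        String read() {
            return staticSecret;
        }
    }

    // Inner (non-static) class: can reach static and instance members, even private ones.
    class Inner {
        String read() {
            return staticSecret + " / " + instanceSecret;
        }
    }

    public static void main(String[] args) {
        Outer.Nested nested = new Outer.Nested(); // no outer instance needed
        System.out.println(nested.read());

        Outer outer = new Outer();
        Outer.Inner inner = outer.new Inner();    // requires an outer instance
        System.out.println(inner.read());
    }
}
```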

Local classes:

Local classes are a special type of inner class, in which the class is defined inside a method or scope block.

They have access to both static and non-static members in the enclosing context.

They can only define instance members.

void run() {
    class Local {
        void run() {
            // method implementation
        }
    }
    Local local = new Local();
    local.run();
}

Anonymous classes:

They are inner classes with no name. Since they have no name, we can't use them to create further instances; we have to declare and instantiate an anonymous class in a single expression at the point of use.

Extending a class:

new Book("Design Patterns") {
    @Override
    public String description() {
        return "Famous GoF book.";
    }
};

We create anonymous class instances at the same moment as we declare them. From anonymous class instances, we can access local variables and the enclosing class's members.

Implementing an interface:

int count = 1;

Runnable action = new Runnable() {
    @Override
    public void run() {
        System.out.println("Runnable with captured variables: " + count);
    }
};

-----

Uses:

button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        ...
    }
});

Java references:

Strong/hard reference: the object can't be garbage collected if it's reachable through any strong reference.

MyClass obj = new MyClass();

Weak Reference:

If you only have weak references to an object (with no strong references), then the object will be
reclaimed by GC in the very next GC cycle.

WeakReference<Cache> cache = new WeakReference<Cache>(data);

Soft Reference:

If you only have soft references to an object (with no strong references), then the object will be
reclaimed by GC only when JVM runs out of memory.

Phantom reference: objects referenced only by phantom references are eligible for garbage collection.

But before removing them from memory, the JVM puts them in a queue called the 'reference queue'.

They are put in the reference queue after the finalize() method has been called on them.
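A sketch of the weak-reference behavior described above (actual reclamation depends on the garbage collector; System.gc() is only a hint):

```java
import java.lang.ref.WeakReference;

public class WeakReferenceDemo {
    public static void main(String[] args) {
        Object data = new Object();
        WeakReference<Object> weak = new WeakReference<>(data);

        // While a strong reference exists, the referent is still reachable.
        System.out.println("Before clearing: " + (weak.get() != null)); // true

        data = null;   // drop the only strong reference
        System.gc();   // request a collection; weak refs are cleared when GC runs

        System.out.println("After GC, cleared: " + (weak.get() == null));
    }
}
```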

Read more: https://javarevisited.blogspot.com/2014/03/difference-between-weakreference-vs-softreference-phantom-strong-reference-java.html#ixzz7Bt12ulUG
Java Asynchronous & Non-Blocking programming With Reactor:

Asynchronous: Asynchronous literally means not synchronous. Email is asynchronous. You send a mail,
you don't expect to get a response NOW. But it is not non-blocking. Essentially what it means is an
architecture where "components" send messages to each other without expecting a response
immediately. HTTP requests are synchronous. Send a request and get a response.

Non-Blocking: This term is mostly used with IO. What this means is that when you make a system call, it
will return immediately with whatever result it has without putting your thread to sleep (with high
probability). For example non-blocking read/write calls return with whatever they can do and expect
caller to execute the call again. try_lock for example is non-blocking call. It will lock only if lock can be
acquired. Usual semantics for systems calls is blocking. read will wait until it has some data and put
calling thread to sleep.

Thread-per-Request Programming:

Let's consider that we would like to perform the below task:

Execute a DB query based on the given input parameters

Process the DB query result (say, lowercase to uppercase)

Write the processed result into a file

Here steps 1 and 3 are I/O tasks and step 2 is a computation task. All 3 steps are synchronous blocking calls: step 2 cannot be done until step 1 is completed, and step 3 cannot be done until step 2 is completed. Let's assume that the above DB query is a multi-table join query which will take some time to execute. Let's also assume that we could receive hundreds of concurrent requests to perform the above task.

We normally encapsulate all the steps to be performed by creating an object. In the traditional programming model, when we receive concurrent requests, we create a thread and an object for each request, and the thread executes the steps using the object. If we receive hundreds of requests, then we create hundreds of threads. The problem here is that threads consume memory! That is, Java has to allocate a procedure stack with a considerable amount of memory to each thread for the execution. While performing the I/O tasks, each thread has to wait for the step to be completed before proceeding with the next steps. For example, the thread has to wait for the DB server to execute the query and return the result. As a thread consumes memory while it waits, this approach is neither practical nor scalable, and it sets a limit on the number of threads we can create for the application.
Event-Driven Programming:

Event-driven programming is based on asynchronous procedure calls. That is, we split the 1 big synchronous task above into 3 simple tasks (i.e., each step becomes a task). All these tasks are queued, read from the queue one by one, and executed using dedicated worker threads from a thread pool. When there are no tasks in the queue, the worker threads simply wait for tasks to arrive. Usually the number of worker threads is small enough to match the number of available processors. This way, you can have 10000 tasks in the queue and process them all efficiently, whereas we cannot create 10000 threads on a single machine.

So the event-driven programming model is NOT going to increase the performance (up to a certain limit); the overall response time could still be the same. But it can process more concurrent requests efficiently!
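The model described above can be sketched with a small fixed thread pool consuming many queued tasks (the task body and names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class EventDrivenDemo {
    public static void main(String[] args) throws InterruptedException {
        // A few worker threads service a shared task queue: many tasks, few threads.
        ExecutorService workers = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());

        for (int i = 0; i < 10_000; i++) {
            final int taskId = i;
            workers.submit(() -> {
                // Each "event" is one small unit of work, e.g. one step of a request.
                return ("task-" + taskId).toUpperCase();
            });
        }

        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("All tasks processed by a small worker pool.");
    }
}
```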

Reactive Programming:

Reactive programming is event-driven programming, or a special case of event-driven programming:

1. Sync + blocking
2. Async
3. Non-blocking
4. Non-blocking + async (reactive programming)

Reactive programming is a declarative programming paradigm / an asynchronous programming style in which we use an event-based model to push the data streams to the consumers/observers as and when the data is available or updated. It is completely asynchronous and non-blocking.

In reactive programming, threads are not blocked or waiting for a request to complete. Instead they are
notified when the request is complete / the data changes. Till then they can do other tasks. This makes
us to use less resources to serve more requests.

Reactive programming is based on Observer design pattern. It has following interfaces / components.

• Publisher / Observable
• Observer / Subscriber
• Subscription
• Processor / Operators
Publisher:

Publisher is observable whom we are interested in listening to! These are the data sources or streams.

Observer:

Observer subscribes to Observable/Publisher. Observer reacts to the data emitted by the Publisher.
Publisher pushes the data to the Observers. Publishers are read-only whereas Observers are write-only.

Subscription:

Observer subscribes to Observable/Publisher via an object called Subscription. Publisher’s emitting rate
might be higher than Observer’s processing rate. So in that case, Observer might send some feedback to
the Publisher how it wants the data / what needs to be done when the publisher’s emitting rate is high
via Subscription object.

For example:

We keep the volume up or down based on the current volume when we listen to music.

We slow down the speed of a talk in YouTube if the speaker is too fast.

Processor:

Processor acts as both Publisher and Observer. They stay in between a Publisher and an Observer. It
consumes the messages from a publisher, manipulates it and sends the processed message to its
subscribers. They can be used to chain multiple processors in between a Publisher or an Observer.

Reactive Streams specification: an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure.
Reactive Stream Implementation:

Reactive Stream is a specification which specifies the above 4 interfaces to standardize the programming
libraries for Java. Some of the implementations are

• Project Reactor
• RxJava2
• Akka

Reactive Programming pillars:

1. Asynchronous data processing

2. Non-blocking

3. Functional style / declarative

Schedulers:

Other than Observable and Observer, another key point in Reactive programming is Schedulers! Behind
the scenes, asynchronous behavior is achieved using threads. Reactive libraries hide the complexity and
provide rich set of APIs to manage threads for Observables and Observers. It has 2 important methods.

subscribeOn – specifies the threads on which observable should operate or the thread pool to be used
by the source to emit the data.

publishOn – specifies the threads on which observers should operate. Any publishOn affects the thread
pools used by the subsequent observers.

Publisher Types:
There are 2 types of Observables / Publishers

Cold Publisher

This is lazy

Starts producing/emitting only when a subscriber subscribes to this publisher

Publisher creates a data producer and generates new sets of values for each new subscription

When there are multiple observers, each observer might get different values

Example: Netflix. Movie will start streaming only if the subscriber wants to watch. Each subscriber can
watch a movie any time from the beginning

Hot Publisher

Values are generated outside the publisher even when there are no observers.

There will be only one data producer

All the observers get the value from the single data producer irrespective of the time they started
subscribing to the publisher. It means any new observer might not see the old value emitted by the
publisher.

Example: Radio Stream. Listeners will start listening to the song currently playing. It might not be from
the beginning.

A Publisher is a provider of a potentially unbounded number of sequenced elements, publishing them according to the demand received from its Subscriber(s). Reactor-core has a set of implementations of this Publisher interface. The 2 important implementations from which we would be creating sequences are Flux and Mono.

Reactor is a library that implements the Reactive Streams specification.

1. Mono → emits 0 or 1 item

2. Flux → emits 0 to N items

The core difference is that Reactive is a push model, whereas the Java 8 Streams are a pull model. In a
reactive approach, events are pushed to the subscribers as they come in.

Flux:

Flux is an implementation of Publisher. It will emit 0...N elements and/or a complete or an error signal. A Stream pipeline is synchronous whereas a Flux pipeline is completely asynchronous.

empty → to emit 0 elements / return an empty Flux<T>

Flux.empty()
.subscribe(i -> System.out.println("Received : " + i));

//No output

just →The easiest way to emit an element is using the just method.

subscribe method accepts a Consumer<T> where we define what we do with the emitted element.
Flux.just(1)
.subscribe(i -> System.out.println("Received : " + i));

//Output
Received : 1

Unlike a stream, a Flux can have any number of Observers connected to the pipeline. We can also write it like this if we need to connect more than one observer to a source.

One observer might be collecting all elements into a list while another observer could be logging the element details.

Flux<Integer> flux = Flux.just(1);

// Observer 1
flux.subscribe(i -> System.out.println("Observer-1 : " + i));
// Observer 2
flux.subscribe(i -> System.out.println("Observer-2 : " + i));

//Output
Observer-1 : 1
Observer-2 : 1
just with arbitrary elements:

Flux.just('a', 'b', 'c')
    .subscribe(i -> System.out.println("Received : " + i));

//Output
Received : a
Received : b
Received : c

We can have multiple observers, and each observer will process the emitted elements independently. They might take their own time. Everything happens asynchronously.

The below output shows that the entire pipeline is executed asynchronously by default.

System.out.println("Starts");

// flux emits one element per second
Flux<Character> flux = Flux.just('a', 'b', 'c', 'd')
    .delayElements(Duration.ofSeconds(1));

// Observer 1 - takes 500ms to process each element
flux
    .map(Character::toUpperCase)
    .subscribe(i -> {
        sleep(500);
        System.out.println("Observer-1 : " + i);
    });

// Observer 2 - processes immediately
flux.subscribe(i -> System.out.println("Observer-2 : " + i));

System.out.println("Ends");

// Just to block the execution - otherwise the program would end
// with only the start and end messages

//Output
Starts
Ends
Observer-2 : a
Observer-1 : A
Observer-2 : b
Observer-1 : B
Observer-2 : c
Observer-2 : d
Observer-1 : C
Observer-1 : D
To better understand this behavior, the log() operator is added to the pipeline below.

We have 2 observers subscribed to the source; this is why the onSubscribe method appears twice.

request(32) — here 32 is the default buffer size. The observer requests 32 elements to buffer/emit.

Elements are emitted one by one.

Once all the elements are emitted, the complete call is invoked to inform the observers not to expect any more elements.

Flux<Character> flux = Flux.just('a', 'b', 'c', 'd')
    .log()
    .delayElements(Duration.ofSeconds(1));
Output:
[ INFO] (main) | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
[ INFO] (main) | request(32)
[ INFO] (main) | onNext(a)
[ INFO] (main) | onNext(b)
[ INFO] (main) | onNext(c)
[ INFO] (main) | onNext(d)
[ INFO] (main) | onComplete()
[ INFO] (main) | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
[ INFO] (main) | request(32)
[ INFO] (main) | onNext(a)
[ INFO] (main) | onNext(b)
[ INFO] (main) | onNext(c)
[ INFO] (main) | onNext(d)
[ INFO] (main) | onComplete()

The subscribe method can also accept other parameters to handle the error and completion calls. So far we have been consuming the elements received via the pipeline, but we could also get some unhandled exception. We can pass the handlers as shown here:

subscribe(
    i -> System.out.println("Received :: " + i),
    err -> System.out.println("Error :: " + err),
    () -> System.out.println("Successfully completed"));

Let's take this example, where we simply divide 10 by each element. We get the below output as
expected.
Flux.just(1,2,3)
.map(i -> 10 / i)
.subscribe(
i -> System.out.println("Received :: " + i),
err -> System.out.println("Error :: " + err),
() -> System.out.println("Successfully completed"));

//Output
Received :: 10
Received :: 5
Received :: 3
Successfully completed

Now if we slightly modify our map operation as shown here, we would be doing a division by zero, which
throws an ArithmeticException (a RuntimeException). It is handled cleanly here, without an ugly try/catch block.

Flux.just(1,2,3)
.map(i -> i / (i-2))
.subscribe(
i -> System.out.println("Received :: " + i),
err -> System.out.println("Error :: " + err),
() -> System.out.println("Successfully completed"));

//Output
Received :: -1
Error :: java.lang.ArithmeticException: / by zero

fromArray — use this when you have an array (just would also work here).
String[] arr = {"Hi", "Hello", "How are you"};

Flux.fromArray(arr)
.filter(s -> s.length() > 2)
.subscribe(i -> System.out.println("Received : " + i));

//Output
Received : Hello
Received : How are you
fromIterable — use this when you have a collection of elements and would like to pass them through a Flux pipeline.
List<String> list = Arrays.asList("vins", "guru");
Flux<String> stringFlux = Flux.fromIterable(list)
.map(String::toUpperCase);

fromStream — use this if you have a stream of elements.


List<String> list = Arrays.asList("vins", "guru");
Flux<String> stringFlux = Flux.fromStream(list.stream())
.map(String::toUpperCase);

Be careful with Streams! A Flux can have more than one observer, but a java.util.stream.Stream can be
consumed only once, so the code below will throw an error saying that the stream has already been closed.

//observer-1
stringFlux
.map(String::length)
.subscribe(i -> System.out.println("Observer-1 :: " + i));//observer-2
stringFlux
.subscribe(i -> System.out.println("Observer-2 :: " + i));

The above problem can be fixed by using a Supplier<Stream>, so a fresh stream is created for each subscriber.


Flux.fromStream(() -> list.stream())
.map(String::toUpperCase);
range
//Emits a range of numbers: 5 sequential integers starting at 3 (3, 4, 5, 6, 7)
Flux.range(3, 5)

In all the above options, the elements already exist before they are emitted. What if we need to keep
producing and emitting elements programmatically? Flux has 2 additional methods for that: Flux.create and
Flux.generate. They deserve a separate article, as we need to understand what each is for and when to use
which.

We can also implement Subscriber ourselves (here 'elements' is assumed to be a List<Integer> collecting the results):

Flux.just(1, 2, 3, 4)
.log()
.subscribe(new Subscriber<Integer>() {
@Override
public void onSubscribe(Subscription s) {
s.request(Long.MAX_VALUE);
}

@Override
public void onNext(Integer integer) {
elements.add(integer);
}

@Override
public void onError(Throwable t) {}

@Override
public void onComplete() {}
});

1. onSubscribe() – This is called when we subscribe to our stream


2. request(unbounded) – When we call subscribe, behind the scenes we are creating
a Subscription. This subscription requests elements from the stream. In this case, it defaults
to unbounded, meaning it requests every single element available
3. onNext() – This is called on every single element
4. onComplete() – This is called last, after receiving the last element. There's actually
an onError() as well, which would be called if there is an exception

Backpressure:

Backpressure is when a downstream can tell an upstream to send it less data in order to prevent it from
being overwhelmed.

We can modify our Subscriber implementation to apply backpressure. Let's tell the upstream to only
send two elements at a time by using request():

Flux.just(1, 2, 3, 4)
.log()
.subscribe(new Subscriber<Integer>() {
private Subscription s;
int onNextAmount;

@Override
public void onSubscribe(Subscription s) {
this.s = s;
s.request(2);
}
@Override
public void onNext(Integer integer) {
elements.add(integer);
onNextAmount++;
if (onNextAmount % 2 == 0) {
s.request(2);
}
}

@Override
public void onError(Throwable t) {}

@Override
public void onComplete() {}
});

Combining Two Streams: We can then make things more interesting by combining another stream with
this one. Let's try this using the zipWith() function:
Flux.just(1, 2, 3, 4)
.log()
.map(i -> i * 2)
.zipWith(Flux.range(0, Integer.MAX_VALUE),
(one, two) -> String.format("First Flux: %d, Second Flux: %d", one, two))
.subscribe(elements::add);

assertThat(elements).containsExactly(
"First Flux: 2, Second Flux: 0",
"First Flux: 4, Second Flux: 1",
"First Flux: 6, Second Flux: 2",
"First Flux: 8, Second Flux: 3");

Hot Streams: For example, we could have a stream of mouse movements that constantly needs to be
reacted to or a Twitter feed. These types of streams are called hot streams, as they are always running
and can be subscribed to at any point in time, missing the start of the data.
ConnectableFlux: One way to create a hot stream is by converting a cold stream into one. Let's create a
Flux that lasts forever, outputting the results to the console, which would simulate an infinite stream of
data coming from an external resource
ConnectableFlux<Object> publish = Flux.create(fluxSink -> {
while(true) {
fluxSink.next(System.currentTimeMillis());
}
})
.publish();
publish.subscribe(System.out::println);
publish.subscribe(System.out::println);
// If we try running this code, nothing will happen. It is not until we call
// connect() that the Flux will start emitting:
publish.connect();

Concurrency: All of our examples so far have run on the main thread. However, we can
control which thread our code runs on if we want. The Scheduler interface provides an abstraction
around asynchronous code, and many implementations are provided for us. Let's try subscribing on
a thread other than main:
Flux.just(1, 2, 3, 4)
.log()
.map(i -> i * 2)
.subscribeOn(Schedulers.parallel())
.subscribe(elements::add);

Mono: Mono is another implementation of Publisher. It emits at most one item and then (optionally)
terminates with an onComplete signal or an onError signal. Like Flux, Mono is also asynchronous in nature.
just — to emit one single item

Mono.just(1)
.subscribe(System.out::println);

Both Flux and Mono extend the Publisher<T> interface.

Publisher<Integer> publisher1 = Mono.just(1);


Publisher<Integer> publisher2 = Flux.just(1,2,3);

Using Callable/Supplier

Mono.fromCallable(() -> 1);


Mono.fromSupplier(() -> "a");

fromRunnable — We know that a Runnable does not accept any parameter and does not return anything
either. So what do you think the below code will do?

Mono.fromRunnable(() -> System.out.println("Hello"))


.subscribe(i -> System.out.println("Received :: " + i));

The above code would just print "Hello" and nothing else happens, because there is no item to
emit. But if we add the error and completion handlers, we get the below output. This is helpful when we
need to be notified that a Runnable has completed.
Mono.fromRunnable(() -> System.out.println("Hello"))
.subscribe(
i -> System.out.println("Received :: " + i),
err -> System.out.println("Error :: " + err),
() -> System.out.println("Successfully completed"));

//Output
Hello
Successfully completed

ConcurrentModificationException:

java.util.ConcurrentModificationException is a very common exception when working with Java
collection classes. Java collection classes are fail-fast, which means that if the collection is changed
while some thread is traversing it using an iterator, iterator.next() will throw
ConcurrentModificationException. It can occur in multithreaded as well as single-threaded Java
programs.

https://www.java67.com/2015/10/how-to-solve-concurrentmodificationexception-in-java-
arraylist.html

https://www.digitalocean.com/community/tutorials/java-util-concurrentmodificationexception

List<String> listOfPhones = new ArrayList<String>(Arrays.asList(
        "iPhone 6S", "iPhone 6", "iPhone 5", "Samsung Galaxy 4", "Lumia Nokia"));
System.out.println("list of phones: " + listOfPhones);

// Iterating and removing objects from the list
// This is the wrong way - it will throw ConcurrentModificationException
for (String phone : listOfPhones) {
    if (phone.startsWith("iPhone")) {
        // listOfPhones.remove(phone); // will throw exception
    }
}

// The right way: removing elements using Iterator's remove() method
for (Iterator<String> itr = listOfPhones.iterator(); itr.hasNext();) {
    String phone = itr.next();
    if (phone.startsWith("iPhone")) {
        // listOfPhones.remove(phone); // wrong again
        itr.remove(); // right call
    }
}

// Will throw ConcurrentModificationException (myList is a List<String>)
Iterator<String> it = myList.iterator();
while (it.hasNext()) {
    String value = it.next();
    System.out.println("List Value:" + value);
    if (value.equals("3"))
        myList.remove(value); // will throw ConcurrentModificationException
}

// Will not throw any exception, because we are modifying the value of an existing
// key; we are not changing the map's size (myMap is a Map<String, String>)
Iterator<String> it1 = myMap.keySet().iterator();
while (it1.hasNext()) {
    String key = it1.next();
    if (key.equals("2")) {
        myMap.put("1", "4"); // will not throw any exception
        // myMap.put("4", "4"); // will throw exception
    }
}


Iterator Vs ListIterator:

• The basic difference between Iterator and ListIterator is that, while both are cursors, Iterator can
traverse elements in a collection only in the forward direction, whereas ListIterator can traverse
in both forward and backward directions.
• Using Iterator you cannot add any element to a collection. Using ListIterator you can add
elements, via its add() method.
• Both can remove elements via remove(), but only ListIterator can additionally replace an
existing element, using set().
• Using Iterator you can traverse any collection: List, Set, or the views of a Map. ListIterator can
traverse List implementations only.
• You cannot retrieve the index of an element using Iterator. Since List is sequential and index-based,
ListIterator exposes the index via nextIndex() and previousIndex().
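A short runnable sketch (with hypothetical list contents) illustrating the ListIterator capabilities listed above: bidirectional traversal, nextIndex(), and in-place set()/add() without a ConcurrentModificationException.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("vins", "guru"));
        ListIterator<String> it = names.listIterator();

        // Forward traversal; nextIndex() exposes the cursor position
        while (it.hasNext()) {
            System.out.println(it.nextIndex() + " -> " + it.next());
        }

        // Backward traversal - not possible with a plain Iterator
        while (it.hasPrevious()) {
            String s = it.previous();
            if (s.equals("vins")) {
                it.set("VINS"); // replace the last returned element in place
                it.add("new");  // insert through the iterator - no CME
            }
        }
        System.out.println(names); // [new, VINS, guru]
    }
}
```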
Java8 features:

Functional programming uses a declarative style of programming.

Imperative style embraces object mutability, which causes a lot of problems with multi-threading.

Declarative style embraces object immutability.

Lambda:

Equivalent to a function (a method without a name).

They are referred to as anonymous functions.

They can be assigned to a variable and passed to a method.

They are not tied to any class.

Lambda Restrictions:

1. Lambda parameters are not allowed to reuse the name of a local variable from the enclosing scope.

2. A lambda is not allowed to reassign a value to a captured local variable.

3. There are no restrictions on instance variables.

4. Lambdas are allowed to declare their own local variables, but captured local variables cannot be
modified even though they are not declared final. This concept is called Effectively Final, and it exists
to keep concurrency operations safe.

int localLambdaVar = 10;

// BookInterface is a hypothetical functional interface
BookInterface bookInterface = (category) -> {

    // some logic to get the book count from the library for a category

    return localLambdaVar;

};

If we try to change the value either in the lambda itself or elsewhere in the enclosing scope, we will get
an error.

No keyword is required to declare a variable as an effective final.

Enclosed variables can be used at any place in Lambda but the value can not be changed.

Method references:

1. They simplify the implementation of functional interfaces.

2. They are a shortcut for writing lambda expressions.

3. Method references promote code reusability.

A method reference is alternative syntax for a lambda expression. The referenced method must match
the functional interface's abstract method in its argument list and return type (static or non-static both
work; the method name can vary).

Test::m1 --> static method reference

object::m1 --> instance method reference

Interface i = Test::new --> constructor reference
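A sketch of the common method-reference forms, using only JDK types so it is self-contained:

```java
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

public class MethodRefDemo {
    public static void main(String[] args) {
        // Static method reference: ClassName::staticMethod
        Function<String, Integer> parse = Integer::parseInt;

        // Instance method reference on a particular object: obj::method
        String greeting = "hello";
        Supplier<String> upper = greeting::toUpperCase;

        // Instance method reference on an arbitrary object of a type:
        // the first argument becomes the receiver
        BiFunction<String, String, Boolean> startsWith = String::startsWith;

        // Constructor reference: ClassName::new
        Supplier<StringBuilder> newBuilder = StringBuilder::new;

        System.out.println(parse.apply("42"));              // 42
        System.out.println(upper.get());                    // HELLO
        System.out.println(startsWith.apply("java", "ja")); // true
        System.out.println(newBuilder.get().append("ok"));  // ok
    }
}
```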

Functional Interfaces:

1. Consumer (used by forEach)

Consumer accepts a generified argument and returns nothing.

Consumer<String> r = (a) -> {
    System.out.println(a);
};
r.accept("hi");

BiConsumer: accepts 2 values.

2. Supplier (used by Stream.generate, Collectors.toCollection)

The Supplier functional interface is yet another Function specialization that does not take any
arguments. We typically use it for lazy generation of values.

Supplier<String> s=()->{
return "hello";
};
System.out.println(s.get());
3. Function (map)

Function interface with a method that receives one value and returns another.

public interface Function<T, R> { … }

BiFunction<Integer,Integer,Integer> f=(a,b)->{
return a+b;
};
System.out.println(f.apply(1, 2));
4.Predicate(filter): a predicate is a function that receives a value and returns a boolean value
BiPredicate will accept two values.
BiPredicate<Integer,Integer> p=(a,b)->{
return a>b;
};
System.out.println(p.test(1, 2));
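Predicates also compose via the default methods and(), or(), and negate(), which keeps individual conditions small and reusable; a small sketch:

```java
import java.util.function.Predicate;

public class PredicateDemo {
    public static void main(String[] args) {
        Predicate<Integer> positive = n -> n > 0;
        Predicate<Integer> even = n -> n % 2 == 0;

        // Combine small predicates instead of writing one big lambda
        Predicate<Integer> positiveAndEven = positive.and(even);
        Predicate<Integer> positiveOrEven = positive.or(even);
        Predicate<Integer> odd = even.negate();

        System.out.println(positiveAndEven.test(4));  // true
        System.out.println(positiveAndEven.test(-4)); // false
        System.out.println(positiveOrEven.test(-4));  // true
        System.out.println(odd.test(3));              // true
    }
}
```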
5. Operators: receive and return the same value type

UnaryOperator-->if the input and return types are always the same, go with UnaryOperator rather than Function.

UnaryOperator is a child interface of Function.

UnaryOperator<Integer> o = i -> i * i * i;
o.apply(10);
BinaryOperator will accept 2 values.

BinaryOperator-->if it takes two inputs of the same type and returns that same type, go with BinaryOperator.

BinaryOperator<String> o = (s1, s2) -> s1 + s2;

To resolve performance issues (autoboxing/unboxing), the JDK provides specialized interfaces for primitives:

IntPredicate

IntFunction, ToIntFunction

IntConsumer, LongConsumer

IntSupplier

IntUnaryOperator
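As a sketch, the primitive specializations compose naturally with IntStream and avoid creating an Integer wrapper per element:

```java
import java.util.function.IntPredicate;
import java.util.function.IntUnaryOperator;
import java.util.stream.IntStream;

public class PrimitiveFunctionalDemo {
    public static void main(String[] args) {
        // IntPredicate works on int directly - no Integer boxing per call
        IntPredicate isEven = i -> i % 2 == 0;

        // IntUnaryOperator: int in, int out
        IntUnaryOperator square = i -> i * i;

        int sumOfEvenSquares = IntStream.rangeClosed(1, 5)
                .filter(isEven)   // keeps 2, 4
                .map(square)      // 4, 16
                .sum();           // 20

        System.out.println(sumOfEvenSquares); // 20
    }
}
```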

https://stackify.com/streams-guide-java-8/

2. Dates

LocalDate, LocalTime, LocalDateTime


LocalDate date = LocalDate.now();
date.plusMonths(6);
date.getDayOfMonth();
date.getYear();

LocalTime time = LocalTime.now();
time.getMinute();
time.getSecond();

// Period - for measuring the amount of time between two dates
LocalDate date1 = LocalDate.of(1989, 12, 12);
LocalDate date2 = LocalDate.now();
Period p = Period.between(date1, date2);
p.getYears(); p.getMonths(); p.getDays();

// ZoneId - for locating a particular zone
ZoneId id = ZoneId.of("America/Los_Angeles");
ZonedDateTime zt = ZonedDateTime.now(id);
System.out.println(zt);

Year year = Year.of(1973);
year.isLeap();

// Convert util Date to LocalDate
Date d = new Date();
LocalDate ld = d.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();

// Convert LocalDate to util Date
Date d1 =
Date.from(ld.atTime(LocalTime.now()).atZone(ZoneId.systemDefault()).toInstant());

String str = "13:00";
LocalTime lt = LocalTime.parse(str);

String str1 = "13*00";
DateTimeFormatter dtf = DateTimeFormatter.ofPattern("HH*mm");
LocalTime lt1 = LocalTime.parse(str1, dtf);

4.Streams

To represent a group of objects as a single entity, go with a Collection.

If you want to process objects from a Collection, go with Streams.

peek() → for debugging streams.

map() → transforms each element one-to-one.

flatMap() → maps each element to a stream and flattens the results into a single stream.

Other useful operations: distinct(), count(), sorted().
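The map() vs flatMap() distinction can be sketched with a nested list: map keeps one output per input, while flatMap flattens the inner streams.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapVsFlatMapDemo {
    public static void main(String[] args) {
        List<List<Integer>> nested = Arrays.asList(
                Arrays.asList(1, 2), Arrays.asList(3, 4));

        // map: one output element per input element
        List<Integer> sizes = nested.stream()
                .map(List::size)
                .collect(Collectors.toList()); // [2, 2]

        // flatMap: flattens the inner streams into one Stream<Integer>
        List<Integer> flat = nested.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList()); // [1, 2, 3, 4]

        System.out.println(sizes);
        System.out.println(flat);
    }
}
```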

5. forEach() vs forEachOrdered()

forEachOrdered--Guarantee order even in case of parallel stream


6.short-circuiting operations:

skip() to skip the first 3 elements, and limit() to limit to 5 elements from the infinite stream generated
using iterate():

// infiniteStream could be e.g. Stream.iterate(1, i -> i + 1)
List<Integer> collect = infiniteStream
    .skip(3)
    .limit(5)
    .collect(Collectors.toList());

boolean allEven = intList.stream().allMatch(i -> i % 2 == 0);


boolean oneEven = intList.stream().anyMatch(i -> i % 2 == 0);
boolean noneMultipleOfThree = intList.stream().noneMatch(i -> i % 3 == 0);

7.Specialized Operations

sum(), average(), range() etc.

double avgSal = empList.stream()
    .mapToDouble(Employee::getSalary)
    .average()
    .orElse(0.0); // average() returns an OptionalDouble

Using map() instead of mapToInt() returns a Stream<Integer> and not an IntStream (likewise, map()
vs mapToDouble() for a DoubleStream).

8.Reduction Operations

findFirst(), min() and max().

Double sumSal = empList.stream()
    .map(Employee::getSalary)
    .reduce(0.0, Double::sum);

9.joining

String empNames = empList.stream()
    .map(Employee::getName)
    .collect(Collectors.joining(", ")); // joining already returns a String

Vector<String> empNameVector = empList.stream()
    .map(Employee::getName)
    .collect(Collectors.toCollection(Vector::new));
10. Streams API factory methods

Stream.of("A", "B");

infinite streams:

iterate:

Stream<Integer> evenNumStream = Stream.iterate(2, i -> i * 2); //2, 4, 8, 16, 32

generate:

Stream.generate(Math::random)
.limit(5)
.forEach(System.out::println);

11. Streams API Numeric Streams

IntStream i = IntStream.rangeClosed(1, 50);

//i.forEach(a -> System.out.println(a));
System.out.println(i.max().getAsInt()); // 50; max() returns an OptionalInt
//System.out.println(i.count()); // a stream can be consumed only once

boxed()→for converting primitive to Wrapper

mapToInt(Integer:intValue)→for converting wrapper to primitive

mapToObj()→

IntStream.range(1, 2).mapToObj((j) -> {
    return Integer.valueOf(j); // avoid the deprecated new Integer(j)
});

//convert String to Stream of Strings


String testString="ABC";
Stream<String> stringStream = testString.codePoints().mapToObj(c ->
String.valueOf((char) c));

//convert IntStream to stream of Strings


IntStream nums = IntStream.of(10, 20, 30);
Stream<String> stream = nums.mapToObj(i -> String.valueOf(i));

//convert to ODD,EVEN,ZERO
IntStream st=IntStream.of(0,1,2,5,7,8);
List<String> al=new ArrayList<String>();
st.forEach(j->{
if(j==0) {
al.add("ZERO");
}else if(j%2==0) {
al.add("EVEN");
}else if(j%2!=0) {
al.add("ODD");
}
});
al.forEach(i->System.out.println(i));

//convert int to String stream


IntStream st1=IntStream.of(0,1,2,5,7,8);
List<String> l=st1.mapToObj(i->String.valueOf(i)).collect(Collectors.toList());
l.forEach(i->System.out.println(i));

Optional:

1. To represent values that may or may not be present, without using null.

2. Avoids NullPointerException and unnecessary null checks.

Optional<String> op=Optional.ofNullable("Hello");
op.ifPresent(s->System.out.println(s));
if(op.isPresent()) {
op.get();
}
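A small sketch of idiomatic Optional chaining with map(), filter(), and orElse(), which usually reads better than isPresent()/get():

```java
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        Optional<String> present = Optional.ofNullable("hello");
        Optional<String> empty = Optional.ofNullable(null);

        // map() runs only when a value is present; orElse() supplies a fallback
        String result = present.map(String::toUpperCase).orElse("DEFAULT"); // HELLO
        String fallback = empty.map(Object::toString).orElse("DEFAULT");    // DEFAULT

        System.out.println(result);
        System.out.println(fallback);
        // filter() turns a non-matching value into an empty Optional
        System.out.println(present.filter(s -> s.length() > 10).isPresent()); // false
    }
}
```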

Hashmap enhancements in java8 :

Java 8 came with a new strategy for HashMap objects in the case of high collisions.

To address this issue, Java 8 buckets use balanced trees instead of linked lists after a certain
threshold is reached. HashMap starts by storing Entry objects in a linked list, but once the number of
items in a bucket grows beyond a certain threshold (8, in the OpenJDK implementation), the bucket
switches from a linked list to a balanced (red-black) tree.

These changes ensure performance of O(log n) in worst-case scenarios and O(1) with a proper
hashCode().

Node<K,V>[] table;
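A minimal sketch of why this matters: BadKey below is a hypothetical key class whose hashCode() always collides, forcing every entry into one bucket. The threshold of 8 refers to OpenJDK's TREEIFY_THRESHOLD; the point here is simply that lookups stay correct (and internally become O(log n) rather than O(n)).

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // A key whose hashCode always collides - deliberate worst case
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 1; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        // Well past the treeify threshold: the single bucket becomes a tree
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), i);
        }
        System.out.println(map.get(new BadKey(999))); // 999
        System.out.println(map.size());               // 1000
    }
}
```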

CompletableFuture in Java8:

https://www.callicoder.com/java-8-completablefuture-tutorial/
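As a minimal sketch of the API covered in depth by the linked tutorial: supplyAsync runs work on the common ForkJoinPool, and thenApply/thenCombine compose further asynchronous stages.

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) throws Exception {
        // supplyAsync runs on the common ForkJoinPool; thenApply transforms the result
        CompletableFuture<String> future = CompletableFuture
                .supplyAsync(() -> "hello")
                .thenApply(String::toUpperCase)
                // thenCombine merges the results of two independent futures
                .thenCombine(CompletableFuture.supplyAsync(() -> " world"),
                        (a, b) -> a + b.toUpperCase());

        // get()/join() block until the whole pipeline completes
        System.out.println(future.get()); // HELLO WORLD
    }
}
```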

Java9 features:

Collections Factory methods:

List<String> list = Arrays.asList("Hi", "Hello");
// Returns an unmodifiable view; it has no separate storage,
// so changes made through the backing list show through
List<String> listView = Collections.unmodifiableList(list);

list.set(1, "test");
System.out.println(listView); // [Hi, test]

//Factory methods for the collections


//it will return unmodifiable list
List<String> list1= List.of("Hi","Hello", "How r u");
//list1.add("test");//throws UnsupportedOperationException Exception
//System.out.println(list1);

Compact Strings:

Compact Strings are one of the performance enhancements introduced in the JVM as part of JDK 9. Until
JDK 8, whenever we created a String object it was internally represented as a char[] holding the
characters of the String.

• Until JDK 8, Java represented a String as a char[], because every character in Java takes 2 bytes
(Java internally uses UTF-16).

• If a String contains only English-language text, each character can be represented using a single
byte; we do not need 2 bytes per character. Some characters require 2 bytes, but most require
only 1 byte and fall under the LATIN-1 character set. So there is scope to improve memory
consumption and performance.

• Java 9 introduced the concept of Compact Strings. Whenever we create a String object whose
characters can all be represented using 1 byte (the LATIN-1 representation), Java internally
creates a byte[]. Otherwise, if any character requires more than 1 byte, each character is stored
using 2 bytes, i.e. the UTF-16 representation.

• That is how the internal implementation of String was changed, known as Compact Strings,
which improves the memory consumption and performance of String.

Modules:

Java 9 introduces a new level of abstraction above packages, formally known as the Java Platform
Module System (JPMS), or “Modules” for short.

A Module is a group of closely related packages and resources along with a new module descriptor file.

When we create a module, we include a descriptor file that defines several aspects of our new module:

• Name – the name of our module

• Dependencies – a list of other modules that this module depends on

• Public Packages – a list of all packages we want accessible from outside the module

• Services Offered – we can provide service implementations that can be consumed by other
modules

• Services Consumed – allows the current module to be a consumer of a service

• Reflection Permissions – explicitly allows other classes to use reflection to access the private
members of a package

There are four types of modules in the new module system:

• System Modules – These are the modules listed when we run the java --list-modules command.
They include the Java SE and JDK modules.

• Application Modules – These modules are what we usually want to build when we decide to use
Modules. They are named and defined in the compiled module-info.class file included in the
assembled JAR.

• Automatic Modules – We can include unofficial modules by adding existing JAR files to the
module path. The name of the module will be derived from the name of the JAR. Automatic
modules will have full read access to every other module loaded by the path.

• Unnamed Module – When a class or JAR is loaded onto the classpath, but not the module path,
it’s automatically added to the unnamed module. It’s a catch-all module to maintain backward
compatibility with previously-written Java code.
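A hypothetical module-info.java sketch tying the descriptor aspects above together; all module, package, and service names here are illustrative.

```java
// module-info.java - a hypothetical descriptor for an "inventory" application module
module com.example.inventory {
    // Dependencies: modules this module reads
    requires java.sql;

    // Public packages: visible to any module that reads this one
    exports com.example.inventory.api;

    // Reflection permission, granted only to a specific framework module
    opens com.example.inventory.model to com.fasterxml.jackson.databind;

    // Services consumed and offered
    uses com.example.inventory.spi.PriceProvider;
    provides com.example.inventory.spi.PriceProvider
            with com.example.inventory.internal.DefaultPriceProvider;
}
```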


Java10 features:

1.LVT (Local Variable Type Inference):

Type Inference in Java7: The compiler will look at LHS.

List<String> al=new ArrayList<>();

Type inference refers to the automatic detection of the datatype of a variable, done generally at the
compiler time.

Local variable type inference is a feature in Java 10 that allows the developer to skip the type
declaration associated with local variables (those defined inside method definitions, initialization blocks,
for-loops, and other blocks like if-else), and the type is inferred by the JDK. It will, then, be the job of the
compiler to figure out the datatype of the variable.

In below example, the type of message would be String.

Note that this feature is available only for local variables with an initializer. It cannot be used for
member variables, method parameters, return types, etc.; the initializer is required, since without it the
compiler cannot infer the type.

This enhancement helps in reducing the boilerplate code.

public class Java10LVT {


//LVT in static block
static
{
var x = "Hi there";
System.out.println(x);
}

//As a return value in a method


int returnAnyNumber()
{
var num = 100;
return num;
}

public static void main(String[] args) {


var message = "Hello, Java 10";
System.out.println(message);
// Declaring iteration variables in enhanced for loops using LVTI in Java
int[] arr = {1,2,3};
for (var x : arr)
System.out.println(x + "\n");
Java10LVT lvt=new Java10LVT();
System.out.println(lvt.returnAnyNumber());
//Map<Integer, String> map = new HashMap<>();
//This also helps to focus on the variable name rather than on the variable type
var idToNameMap = new HashMap<Integer, String>();
    }
}

There are cases where declaration of local variables using the keyword ‘var’ produces an error

a. Not permitted in class fields

b. Not permitted for uninitialized local variables

var x; /* error: cannot use 'var' on variable without initializer*/

c. Not allowed as parameter for any methods

void show(var a) /*Error: can't use 'var' on method parameters*/

d. Not permitted in method return type

public var show() /* Error: Method return type can't be var*/

e. Not permitted for variables initialized with 'null'

var x = null; // Error: variable initializer is 'null'

Application Class-Data Sharing:

Application Class-Data Sharing, or AppCDS, builds upon the Class-Data Sharing (CDS) feature that has
been part of the Java HotSpot VM since Java 5. Initially, CDS was designed to reduce the startup time of
Java applications by sharing common class metadata across different Java processes. However, this
functionality was limited to the JDK’s system classes.
Java 10 expanded this feature with the introduction of AppCDS, allowing application classes to also be
placed in the shared archive. This had the dual effect of improving startup time and reducing the
memory footprint of Java applications.

The Benefits:

By sharing common class metadata between different Java processes and placing application classes
into the shared archive, the overhead of class loading is significantly reduced. This directly improves the
startup time of applications, which is crucial for large-scale applications and services that require
frequent restarts or have many short-lived tasks.

Moreover, AppCDS reduces the memory footprint of the JVM. This is especially beneficial in
containerized and cloud environments where resources are shared, and efficiency is critical.

Using Application Class-Data Sharing:

To leverage AppCDS, two steps are needed:

1. Create a list of classes that need to be included in the shared archive. You can generate this list
by running your application with the following JVM argument: -
XX:DumpLoadedClassList=<filename>.

2. Create the shared archive using the list of classes. This is done by using the -Xshare:dump JVM
argument.

Example:

Here’s how you can create the class list and shared archive:

# Step 1: Create the class list
java -XX:DumpLoadedClassList=myapp.lst -cp myapp.jar

# Step 2: Create the shared archive
java -Xshare:dump -XX:SharedClassListFile=myapp.lst -XX:SharedArchiveFile=myapp.jsa -cp myapp.jar

And to use the shared archive when running your application:

java -Xshare:on -XX:SharedArchiveFile=myapp.jsa -cp myapp.jar

Java11 features:

New String Methods:

Java 11 adds a few new methods to the String class

String multilineString = "Hi \n \n Hello \n Developers \n welcome.";


List<String> lines = multilineString.lines()
.filter(line -> !line.isBlank())
.map(String::strip)
.collect(Collectors.toList());
System.out.println(lines);
String text=" ";
System.out.println(text.isBlank()); //true

String name="Fayaz";
System.out.println(name.repeat(3)); //FayazFayazFayaz

String str = " Hello \u2001";

System.out.println(name.trim().length()); // 5
System.out.println(str.strip().length()); // 5 - strip() also removes Unicode white space that trim() misses

The new HTTP client from the java.net.http package was introduced in Java 9. It has now become a
standard feature in Java 11.

The new HTTP API improves overall performance and provides support for both HTTP/1.1 and HTTP/2:

HttpClient httpClient = HttpClient.newBuilder()


.version(HttpClient.Version.HTTP_2)
.connectTimeout(Duration.ofSeconds(20))
.build();
HttpRequest httpRequest = HttpRequest.newBuilder()
.GET()
.uri(URI.create("http://localhost:" + port))
.build();
HttpResponse<String> httpResponse = httpClient.send(httpRequest,
        HttpResponse.BodyHandlers.ofString());

Java 12 new features:

String Class New Methods:

indent adjusts the indentation of each line based on the integer parameter. If the parameter is greater
than zero, new spaces are inserted at the beginning of each line. On the other hand, if the parameter
is less than zero, it removes spaces from the beginning of each line. If a given line does not contain
sufficient white space, then all leading white space characters are removed.

String multilineString = "Hi \n \n Hello \n Developers \n welcome.";


System.out.println(multilineString.indent(2));

Switch Expressions:

Switch expressions are not only more compact and readable, they also remove the need for break
statements: code execution does not fall through after the first match.

Another notable difference is that we can assign a switch expression directly to a variable, which was not
possible previously. It's also possible to execute code in switch expressions without returning any value:
DayOfWeek dayOfWeek = LocalDate.now().getDayOfWeek();
String typeOfDay = "";
typeOfDay = switch (dayOfWeek) {
case MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY -> "Working Day";
case SATURDAY, SUNDAY -> "Day Off";

};
System.out.println(typeOfDay);

Java13 new features:

Text Blocks:

Earlier, to embed JSON in our code, we would declare it as a String literal:

String JSON_STRING = "{\r\n" + "\"name\" : \"Baeldung\",\r\n" + "\"website\" :


\"https://www.%s.com/\"\r\n" + "}";

With Java13:

String TEXT_BLOCK_JSON = """


{
"name" : "Baeldung",
"website" : "https://www.%s.com/"
}
""";

Switch expression with yield:

Using yield, we can now effectively return values from a switch expression:

var me = 4;
var operation = "squareMe";
var result = switch (operation) {
case "doubleMe" -> {
yield me * 2;
}
case "squareMe" -> {
yield me * me;
}
default -> me;
};
System.out.println(result);
Java14 features:

1. Switch expressions have been standardized, making them part and parcel of the development kit.

String day = "FRIDAY";


boolean isTodayHoliday;
switch (day) {
case "MONDAY":
case "TUESDAY":
case "WEDNESDAY":
case "THURSDAY":
case "FRIDAY":
isTodayHoliday = false;
break;
case "SATURDAY":
case "SUNDAY":
isTodayHoliday = true;
break;
default:
throw new IllegalArgumentException("What's a " + day);
}

System.out.println(isTodayHoliday);

2. text blocks now have two new escape sequences:

• \: to indicate the end of the line, so that a new line character is not introduced

• \s: to indicate a single space

3. Records were introduced to reduce repetitive boilerplate code in data-model POJOs. They simplify
day-to-day development, improve efficiency, and greatly minimize the risk of human error.

As we can see, we are making use of a new keyword, record, here. This simple declaration will
automatically add a constructor, getters, equals, hashCode and toString methods for us.

public record User(int id, String password) { };


public static void main(String[] args) {
User user1 = new User(0, "UserOne");
user1.password();
user1.id();
System.out.println(user1.id());
}
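A runnable sketch (Java 16+) showing the generated members in action, in particular the value-based equals() and the canonical toString() format:

```java
public class RecordDemo {
    // One line replaces a boilerplate POJO: constructor, accessors,
    // equals, hashCode, and toString are all generated
    record User(int id, String name) { }

    public static void main(String[] args) {
        User a = new User(1, "vins");
        User b = new User(1, "vins");

        System.out.println(a.id() + " " + a.name()); // accessors, not getId()/getName()
        System.out.println(a.equals(b));             // true - value-based equality
        System.out.println(a);                       // User[id=1, name=vins]
    }
}
```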
4.Helpful NullPointerExceptions:

int[] arr = null;

arr[0] = 1;

java.lang.NullPointerException: Cannot store to int array because "arr" is null

Java15 features:

Sealed classes:

The goal of sealed classes is to allow individual classes to declare which types may be used as sub-types.

This also applies to interfaces, determining which types can implement them.

Before sealed classes, we could only restrict a class from being extended using the final keyword. Sealed
classes can control which classes extend them by including those classes in the permitted list.

public abstract sealed class Person permits Employee, Manager { //... }

In the example below, Shape is a sealed interface, and Circle and Square are its permitted implementations.

sealed interface Shape permits Circle, Square {


public double area();
}

non-sealed class Circle implements Shape {

    double radius;
    public double getRadius() {
        return radius;
    }
    public void setRadius(double radius) {
        this.radius = radius;
    }
    @Override
    public double area() {
        return Math.PI * radius * radius;
    }
}

final class Square implements Shape {

    double length;

    public double getLength() {
        return length;
    }

    public void setLength(double length) {
        this.length = length;
    }

    @Override
    public double area() {
        return length * length;
    }
}

It’s important to note that any class that extends a sealed class must itself be declared sealed, non-
sealed, or final. This ensures the class hierarchy remains finite and known by the compiler.

This finite and exhaustive hierarchy is one of the great benefits of using sealed classes.

• sealed – meaning they must define what classes are permitted to inherit from it using
the permits keyword.

• final – preventing any further subclasses

• non-sealed – allowing any class to be able to inherit from it.

Alternatively, if we define the permitted subclasses in the same file as the sealed class, we can omit the permits clause.
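A minimal sketch of this shorthand (the Vehicle hierarchy is hypothetical): because every implementation is declared in the same source file, no permits clause is needed and the compiler infers the permitted types:

```java
public class SealedSameFile {

    // No 'permits' clause: the compiler infers Car and Truck because
    // they are declared in the same source file
    sealed interface Vehicle {}
    static final class Car implements Vehicle {}
    static final class Truck implements Vehicle {}

    static String describe(Vehicle v) {
        // Pattern-free instanceof check; any other Vehicle must be a Truck
        if (v instanceof Car) {
            return "car";
        }
        return "truck";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Car())); // car
    }
}
```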

Java16 features:

1.Sealed classes, first introduced in Java 15, provide a mechanism to determine which sub-classes are
allowed to extend or implement a parent class or interface.

Java 16 introduces the following changes to sealed classes:

• The Java language recognizes sealed, non-sealed, and permits as contextual keywords (similar to
abstract and extends)
• Restrict the ability to create local classes that are subclasses of a sealed class (similar to the
inability to create anonymous classes of sealed classes).
• Stricter checks when casting sealed classes and classes derived from sealed classes

2.New Additions to Records in Java 16:

With the release of Java 16, we can now define records as members of inner classes. This relaxes a restriction that remained after the incremental record releases in Java 15.

class OuterClass {
    class InnerClass {
        // Records may now be declared as members of an inner class
        record Book(String title, String author, String isbn) {}

        Book book = new Book("Title", "author", "isbn");
    }
}

Java18 features:

1.Exhaustiveness of switch expressions and statements:

A switch expression requires all possible values to be handled in the switch block; otherwise, the compiler reports an error.

static int coverage(Object o) {
    return switch (o) { // Error - not exhaustive
        case String s -> s.length();
        case Integer i -> i;
    };
}

The code below compiles because the default case handles all the remaining types.

static int coverage(Object o) {
    return switch (o) {
        case String s -> s.length();
        case Integer i -> i;
        default -> 0;
    };
}

Foreign Function & Memory API (Second Incubator)

The Foreign Function & Memory API allows developers to call code outside the JVM (foreign functions) and to access data stored outside the JVM (off-heap data) in memory not managed by the JVM (foreign memory).

Default charset to UTF-8:

In Java 18, if the file.encoding system property is set to COMPAT, the JVM uses the Java 17 and earlier algorithm to choose the default charset.

This JEP makes UTF-8 the default charset in Java 18. However, the default can still be configured by setting the file.encoding system property.
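The effective default can be checked at runtime. A brief sketch (the class name is illustrative); on a Java 18+ JVM this reports UTF-8 unless file.encoding overrides it:

```java
import java.nio.charset.Charset;

public class DefaultCharsetDemo {

    // Returns the name of the JVM's default charset
    static String defaultCharsetName() {
        return Charset.defaultCharset().name();
    }

    public static void main(String[] args) {
        // UTF-8 on Java 18+ by default; an explicit -Dfile.encoding setting
        // or an older JVM may yield a different charset
        System.out.println(defaultCharsetName());
    }
}
```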
Java19 features:

1.Pattern Matching for switch:

Guarded patterns are replaced with when clauses in switch blocks.


Below is Java pattern matching for switch using the new when clause as the guard. The old && operator was replaced with when in guarded patterns.

// new guarded pattern with when


static void testJava19(Object o) {
switch (o) {
case String s
when s.length() > 10 ->
System.out.println("String's length longer than 10!");
case String s ->
System.out.println("String's length is " + s.length());
default -> {}
}
}

2.Virtual Threads:

This JEP introduces virtual threads, a lightweight implementation of threads provided by the JDK instead
of the OS. The number of virtual threads can be much larger than the number of OS threads. These
virtual threads help increase the throughput of the concurrent applications.

In Java, every instance of java.lang.Thread is a platform thread that runs Java code on an underlying OS
thread. The number of platform threads is limited to the number of OS threads, like in the above case
study.

A virtual thread is also an instance of java.lang.Thread, but it runs Java code on a carrier OS thread that it shares with other virtual threads, which allows the number of virtual threads to be much larger than the number of OS threads.

Because virtual threads are not limited by the number of OS threads, we can serve many more concurrent requests and achieve higher throughput.

// finish within 1 second


try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 10_000).forEach(i -> {
        executor.submit(() -> {
            Thread.sleep(Duration.ofSeconds(1));
            return i;
        });
    });
}
3.Structured Concurrency APIs to simplify multithreaded programming.

Java 20 features:

1. Added support (as a preview) for record patterns in the header of an enhanced for loop.

record Employee(String id, String name) {}

public class Java20Features {

    public static void main(String[] args) {

        Employee emp1 = new Employee("1", "Fayaz");
        Employee emp2 = new Employee("2", "Ravi");

        var list = new ArrayList<Employee>();
        list.add(emp1);
        list.add(emp2);

        // The record pattern deconstructs each element in the loop header
        for (Employee(String id, String name) : list) {
            System.out.println(id + " " + name);
        }
    }
}

2. Structured Concurrency using "withLock()":

DatabaseConnection connection = ...;

withLock(connection, () -> {
// Execute the code block here.
});

Java 21 features:

1.String Templates:

String templates (a preview feature in Java 21) simplify the process of string formatting and manipulation in Java.

// As of Java 21 (preview): STR template processor with \{} embedded expressions
String productName = "Widget";
double productPrice = 29.99;
boolean productAvailable = true;

String productInfo = STR."""
        Product: \{productName}
        Price: $\{productPrice}
        Availability: \{productAvailable ? "In Stock" : "Out of Stock"}""";

System.out.println(productInfo);

2. Sequenced Collections

Sequenced Collections introduce three new interfaces: SequencedCollection, SequencedSet, and SequencedMap. These interfaces come with additional methods that provide improved access and manipulation capabilities at both ends of a collection.

List<Integer> arrayList = new ArrayList<Integer>();

arrayList.add(1);      // [1]
arrayList.addFirst(0); // [0, 1]
arrayList.addLast(2);  // [0, 1, 2]
arrayList.getFirst();  // 0
arrayList.getLast();   // 2
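The same positional methods exist on SequencedMap as well. A brief sketch using LinkedHashMap, which implements SequencedMap in Java 21 (the class and method names here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.SequencedMap;

public class SequencedMapDemo {

    // Builds a map whose encounter order is controlled with putFirst/putLast
    static SequencedMap<String, Integer> build() {
        SequencedMap<String, Integer> map = new LinkedHashMap<>();
        map.put("b", 2);
        map.putFirst("a", 1); // insert at the front of the encounter order
        map.putLast("c", 3);  // insert at the end
        return map;
    }

    public static void main(String[] args) {
        SequencedMap<String, Integer> map = build();
        System.out.println(map.firstEntry()); // a=1
        System.out.println(map.lastEntry());  // c=3
        System.out.println(map.reversed());   // view in reverse encounter order
    }
}
```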

3. Structured concurrency:

The structured concurrency feature aims to simplify Java concurrent programs by treating multiple tasks
running in different threads (forked from the same parent thread) as a single unit of work. Treating all
such child threads as a single unit will help in managing all threads as a unit; thus, canceling and error
handling can be done more reliably.

In structured multi-threaded code, if a task splits into concurrent subtasks, they all return to the same
place i.e., the task’s code block. This way, the lifetime of a concurrent subtask is confined to that
syntactic block.

In this approach, subtasks work on behalf of a task that awaits their results and monitors them for
failures. At run time, structured concurrency builds a tree-shaped hierarchy of tasks, with sibling
subtasks being owned by the same parent task. This tree can be viewed as the concurrent counterpart
to the call stack of a single thread with multiple method calls.
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    Future<AccountDetails> accountDetailsFuture = scope.fork(() -> getAccountDetails(id));
    Future<LinkedAccounts> linkedAccountsFuture = scope.fork(() -> fetchLinkedAccounts(id));
    Future<DemographicData> userDetailsFuture = scope.fork(() -> fetchUserDetails(id));

    scope.join(); // Join all subtasks
    scope.throwIfFailed(e -> new WebApplicationException(e));

    // The subtasks have completed by now, so process the results
    return new Response(accountDetailsFuture.resultNow(),
            linkedAccountsFuture.resultNow(),
            userDetailsFuture.resultNow());
}

Unnamed/unused variables:

In some other programming languages (such as Scala and Python), it is common to skip naming a variable that will not be used. Since Java 21 (as a preview feature), we can use unnamed/unused variables in Java as well.

Unused variables:

String s = "hello";

try {
int i = Integer.parseInt(s);
//use i
} catch (NumberFormatException _) {
System.out.println("Invalid number: " + s);
}

Object obj = 1; // Use Object type to accommodate different types

String result = switch (obj) {
    case Byte _, Short _, Integer _, Long _ -> "Input is a Number";
    case Float _, Double _ -> "Input is a floating-point number";
    case String _ -> "Input is a string";
    default -> "Object type not expected";
};

System.out.println(result);

// unnamed pattern variables
public void print(Object o) {
    switch (o) {
        case Point(int x, int _) ->
            System.out.printf("The x position is: %d%n", x); // prints only x
        //...
    }
}

References:

https://projectreactor.io/docs/core/release/reference/

https://www.baeldung.com/java-reactor-flux-vs-mono

https://vinsguru.medium.com/java-reactive-programming-flux-vs-mono-c94316b55f36

https://www.programmr.com/blogs/difference-between-asynchronous-and-non-blocking

https://vinsguru.medium.com/java-reactive-programming-schedulers-359b5918aadd

https://vinsguru.medium.com/java-reactive-programming-flux-create-vs-flux-generate-38a23eb8c053


https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-2.html#jvms-2.5.4

https://www.linkedin.com/pulse/java-virtual-machine-changes-78-9-kunal-saxena

https://www.freecodecamp.org/news/jvm-tutorial-java-virtual-machine-architecture-explained-for-
beginners/

http://karunsubramanian.com/websphere/one-important-change-in-memory-management-in-java-8/

https://stackify.com/streams-guide-java-8/

https://mkyong.com/java/what-is-new-in-java-19/
