Advanced DevOps Lab - Manual
EX.1 SET UP THE AWS CLOUD9 IDE
LAB OBJECTIVES:
To understand the benefits of cloud infrastructure, set up the AWS Cloud9 IDE, and launch the Cloud9 IDE.
LAB OUTCOMES:
On successful completion, the student will be able to use the AWS Cloud9 IDE to meet the application requirement.
PROCEDURE:
1. Sign in to your AWS account at https://aws.amazon.com with an IAM user that has the necessary permissions.
2. Use the region selector in the navigation bar to choose the AWS Region where you want to deploy AWS Cloud9.
3. Select the key pair that you created earlier. In the navigation pane of the Amazon EC2 console, choose Key Pairs, and then choose your key pair from the list.
4. On the Select Template page, keep the default setting for the template URL, and then choose Next.
5. Monitor the status of the stack. When the status is CREATE_COMPLETE, the AWS Cloud9 instance is ready.
Upon successful completion of the CloudFormation stack, you will be able to navigate to the
AWS Cloud9 Console and log in to your new instance.
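The stack status can also be checked from the AWS CLI. As a sketch, the command below is echoed as a dry run so nothing touches your account; the stack name "Cloud9Stack" is a placeholder, not the name the template actually uses:

```shell
# Dry-run sketch: check the CloudFormation stack status from the CLI.
# "Cloud9Stack" is a placeholder; substitute your actual stack name.
STACK_NAME="Cloud9Stack"
echo aws cloudformation describe-stacks \
  --stack-name "$STACK_NAME" \
  --query "Stacks[0].StackStatus" --output text
# Drop the leading "echo" to run it for real; expect CREATE_COMPLETE.
```

Once credentials are configured, removing the leading "echo" runs the real query.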
Conclusion:
Thus the concept of the AWS Cloud9 IDE is studied and verified for the application.
EX.2 CREATE A SIMPLE PIPELINE (S3 BUCKET)
LAB OBJECTIVES:
To create a simple pipeline using an S3 bucket as the source location.
LAB OUTCOMES:
On successful completion, the student will be able to create a simple pipeline to meet the application requirement.
PROCEDURE:
You can store your source files or applications in any versioned location. In this tutorial, you
create an S3 bucket for the sample applications and enable versioning on that bucket.
To create an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console
at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, enter a name for your bucket (for example, awscodepipeline-demobucket-
example-date).
4. After the bucket is created, a success banner displays. Choose Go to bucket details.
5. On the Properties tab, choose Versioning. Choose Enable versioning, and then choose
Save.
When versioning is enabled, Amazon S3 saves every version of every object in the bucket.
6. On the Permissions tab, leave the defaults. For more information about S3 bucket and
object permissions, see Specifying Permissions in a Policy.
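The same bucket setup can be done from the AWS CLI. This is a sketch; the bucket name is an example (S3 bucket names must be globally unique), and the commands are echoed as a dry run so the script runs without AWS credentials:

```shell
#!/bin/sh
# Sketch: create a versioned S3 bucket from the CLI instead of the console.
# The bucket name is an example; adjust region and name to your needs.
BUCKET="awscodepipeline-demobucket-example-$(date +%Y%m%d)"
# Echoed as a dry run; drop the leading "echo" to actually run them.
echo aws s3api create-bucket --bucket "$BUCKET" --region us-east-1
echo aws s3api put-bucket-versioning --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled
```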
Step 2: Create Amazon EC2 Windows instances and install the CodeDeploy agent
Step 3: Create your first pipeline in CodePipeline
To create a pipeline
1. Sign in to the AWS Management Console and open the CodePipeline console at
http://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or the Pipelines page, choose Create
pipeline.
3. Leave the settings under Advanced settings at their defaults, and then choose Next
Under Change detection options, leave the defaults. This allows CodePipeline to use Amazon
CloudWatch Events to detect changes in your source bucket.
Choose Next.
4. In Step 4: Add deploy stage, in Deploy provider, choose AWS CodeDeploy. The Region field
defaults to the same AWS Region as your pipeline. In Application name, enter
MyDemoApplication, or choose the Refresh button, and then choose the application name from
the list. In Deployment group, enter MyDemoDeploymentGroup, or choose it from the list, and
then choose Next.
5. In Step 5: Review, review the information, and then choose Create pipeline.
6. The pipeline starts to run. You can view progress and success and failure messages as the
CodePipeline sample deploys a webpage to each of the Amazon EC2 instances in the
CodeDeploy deployment.
The following page is the sample application you uploaded to your S3 bucket.
CONCLUSION:
Thus the creation of the S3 bucket is done and the relevant application is verified.
EX.3 CREATING A CLUSTER WITH KUBEADM
LAB OBJECTIVES:
To create a Kubernetes cluster with kubeadm.
LAB OUTCOMES:
On successful completion, the student will be able to create a cluster with kubeadm to meet the application requirement.
PROCEDURE:
Instructions
Note:
If you have already installed kubeadm, run apt-get update && apt-get upgrade or yum update to
get the latest version of kubeadm.
Make a record of the kubeadm join command that kubeadm init outputs. You need this command
to join nodes to your cluster.
The token is used for mutual authentication between the control-plane node and the joining
nodes. The token included here is secret. Keep it safe, because anyone with this token can add
authenticated nodes to your cluster. These tokens can be listed, created, and deleted with
the kubeadm token command. See the kubeadm reference guide.
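The token operations mentioned above can be sketched as follows; the commands are echoed as a dry run so they can be read anywhere, and would be run on the control-plane node of a real cluster:

```shell
# Dry-run sketch of common kubeadm token operations (control-plane node).
# Drop the leading "echo" on a real cluster.
echo kubeadm token list
echo kubeadm token create --print-join-command
```

`kubeadm token create --print-join-command` prints a fresh, complete `kubeadm join` command for adding nodes.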
Caution:
By default, kubeadm sets up your cluster to use and enforce use of RBAC (role based
access control). Make sure that your Pod network plugin supports RBAC, and so do any
manifests that you use to deploy it.
If you want to use IPv6 (either dual-stack, or single-stack IPv6-only networking) for your cluster, make sure that your Pod network plugin supports IPv6. IPv6 support was added to CNI in v0.6.0.
Note:
The example above assumes SSH access is enabled for root. If that is not the case, you can copy
the admin.conf file to be accessible by some other user and scp using that other user instead.
The admin.conf file gives the user superuser privileges over the cluster. This file should be used
sparingly.
CONCLUSION:
Thus Creating a cluster with kubeadm is done and the related work is verified for application.
EX.4 INSTALL AND SET UP KUBECTL ON WINDOWS
LAB OBJECTIVES:
To install and set up kubectl on Windows.
LAB OUTCOMES:
On successful completion, the student will be able to use kubectl to meet the application requirement.
PROCEDURE:
Note: To find out the latest stable version (for example, for scripting), see https://dl.k8s.io/release/stable.txt.
To validate the binary, use Command Prompt to manually compare CertUtil's output to the downloaded checksum file:
certutil -hashfile kubectl.exe SHA256
type kubectl.exe.sha256
Note: Docker Desktop for Windows adds its own version of kubectl to PATH. If you have
installed Docker Desktop before, you may need to place your PATH entry before the one added
by the Docker Desktop installer or remove the Docker Desktop's kubectl.
1. To install kubectl on Windows, you can use either the Chocolatey package manager (choco install kubernetes-cli) or the Scoop command-line installer (scoop install kubectl).
2. Navigate to your home directory and create the .kube configuration directory:
cd ~
mkdir .kube
cd .kube
In order for kubectl to find and access a Kubernetes cluster, it needs a kubeconfig file, which is
created automatically when you create a cluster using kube-up.sh or successfully deploy a
Minikube cluster. By default, kubectl configuration is located at ~/.kube/config.
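When no installer has created it yet, the default kubeconfig location can be prepared by hand; a minimal sketch:

```shell
# Ensure the default kubeconfig location exists.
mkdir -p "$HOME/.kube"
# Create an empty config only if one is not already present.
[ -f "$HOME/.kube/config" ] || touch "$HOME/.kube/config"
echo "kubectl will read: $HOME/.kube/config"
```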
Install kubectl convert plugin
A plugin for the Kubernetes command-line tool kubectl, which allows you to convert manifests between different API versions. This can be particularly helpful for migrating manifests to a non-deprecated API version with a newer Kubernetes release. For more information, visit Migrate to non-deprecated APIs.
CONCLUSION:
Thus the installation and setup of kubectl on Windows is done and verified.
EX.5 INSTALLATION AND WORKING OF TERRAFORM
LAB OBJECTIVES:
To install Terraform and understand its basic workflow.
LAB OUTCOMES:
On successful completion, the student will be able to work with Terraform.
PROCEDURE:
Install Terraform
Refer to the official download page to get the latest version for the respective OS.
Unzip the downloaded archive:
unzip terraform_0.13.0_linux_amd64.zip
  inflating: terraform
Move the terraform executable file to a directory on your PATH (for example, /usr/local/bin). Check the terraform version:
terraform version
Terraform v0.13.0
You can see these are the available commands in terraform for execution.
geekflare@geekflare:~$ terraform
Usage: terraform [-version] [-help] <command> [args]
Common commands:
apply Builds or changes infrastructure
workspace Workspace management
In this demo, I am going to launch a new AWS EC2 instance using Terraform.
Go to the directory and create a terraform configuration file where you define the provider and
resources to launch an AWS EC2 instance.
Note: I have changed the access and secret keys 😛; you need to use your own.
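The configuration file itself is not reproduced above, so here is a minimal sketch of what it could look like; the AMI ID, credentials, and tag name are placeholders, not working values:

```hcl
# main.tf - minimal sketch (Terraform 0.13 syntax); all values are placeholders.
provider "aws" {
  region     = "us-east-1"
  access_key = "YOUR_ACCESS_KEY"
  secret_key = "YOUR_SECRET_KEY"
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0" # replace with a valid AMI for your region
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-demo"
  }
}
```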
From the configuration mentioned above, you can see I am mentioning the provider like AWS.
Inside the provider, I am giving AWS user credentials and regions where the instance must be
launched.
You can see how easy and readable the configuration file is, even if you are not a die-hard coder.
Now, the first step is to initialize Terraform:
terraform init
Next is the plan stage; it will create the execution graph for creating and provisioning the infrastructure:
terraform plan
The apply stage will execute the configuration file and launch an AWS EC2 instance. When you run the apply command, it will ask you, “Do you want to perform these actions?”; you need to type yes and hit enter:
terraform apply
Finally, if you want to delete the infrastructure, you need to run the destroy command:
terraform destroy
If you recheck the EC2 dashboard, you will see the instance got terminated.
CONCLUSION:
Thus the installation of terraform and the working of terraform is studied and verified.
EX.6 DEPLOYING INFRASTRUCTURE WITH AN APPROVAL JOB USING TERRAFORM
LAB OBJECTIVES:
To deploy infrastructure with an approval job using Terraform.
LAB OUTCOMES:
On successful completion, the student will be able to deploy infrastructure with an approval job using Terraform.
PROCEDURE:
I have found that the easiest way to install Terraform CLI is to download the prebuilt binary for
your platform from the official download page and move it to a folder that is currently in
your PATH environment variable.
Note: Terraform offers extensive documentation about installation, if you would like to try a
different method.
On Linux, for example, installing Terraform is as easy as downloading the prebuilt binary and moving it onto your PATH.
Confirm that it was installed correctly by running:
terraform version
Our first task is to create a project on Google Cloud to store everything related to Terraform
itself, like the state and service accounts. Using the gcloud CLI, create the project and link it to your billing account:
gcloud projects create $TERRAFORM_PROJECT_IDENTIFIER
gcloud beta billing projects link $TERRAFORM_PROJECT_IDENTIFIER \
  --billing-account [billing-account-id]
Remember to replace [billing-account-id] with the actual ID of your billing account. If you do
not know what ID to use, the easiest way to get it is to run:
gcloud beta billing accounts list
Your ID is the first column of the output. Use the ID whose OPEN status (the third column) is TRUE.
Next, enable the required APIs on the Terraform project:
gcloud services enable \
  cloudresourcemanager.googleapis.com \
  cloudbilling.googleapis.com \
  compute.googleapis.com \
  iam.googleapis.com \
  serviceusage.googleapis.com \
  container.googleapis.com
Adding roles
Next, we need to grant roles so Terraform can store its state inside a storage bucket we will create later on. In this step, we will add the viewer and storage.admin roles to the service account inside our Terraform Google Cloud project:
gcloud projects add-iam-policy-binding $TERRAFORM_PROJECT_IDENTIFIER \
  --member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL \
  --role roles/viewer
gcloud projects add-iam-policy-binding $TERRAFORM_PROJECT_IDENTIFIER \
  --member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL \
  --role roles/storage.admin
Note that we are using the commands that we used for the previous project, but we are using the
name circleci-k8s-cluster. Another difference is that we are not setting this project as the default
for the gcloud CLI.
Put the identifier in a variable just like we did before:
export CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER=circleci-k8s-cluster-$RANDOM_ID
Just like the Terraform project, the new project must be linked to your billing account. Run:
gcloud beta billing projects link $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER \
  --billing-account [billing-account-id]
Then grant the Terraform service account owner rights on the new project:
gcloud projects add-iam-policy-binding $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER \
  --member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL \
  --role roles/owner
For this tutorial, we will deploy a simple Kubernetes Cluster to Google Cloud. The first thing we
need to do is to create our repository on GitHub and initialize a local repository in our machine
that points to this repo. Name your project circleci-terraform-automated-deploy.
With the GitHub CLI this is as easy as running:
gh repo create circleci-terraform-automated-deploy
cd circleci-terraform-automated-deploy
Inside the repository, create the following Terraform files:
|- backend.tf
|- k8s-cluster.tf
|- main.tf
|- outputs.tf
|- variables.tf
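For reference, backend.tf typically points Terraform at a state bucket in the Terraform project; a sketch, where the bucket name and prefix are placeholders:

```hcl
# backend.tf - sketch; replace the bucket with your actual state bucket.
terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket"
    prefix = "circleci-terraform-automated-deploy"
  }
}
```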
export GOOGLE_APPLICATION_CREDENTIALS=~/gcloud-terraform-admin.json
Check that Terraform is able to authenticate with Google Cloud to create the initial state. In your
git repository, run:
terraform init
In addition to the version of the Google provider, we are also setting the default values
for project and region. These values will be used by default when creating resources. Make sure to
replace [circleci-project-full-identifier] with the actual value. In our case, this is the value of
the $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER shell variable.
Because we changed a provider, we must also reinitialize our Terraform state. Run:
terraform init
The cluster definition in k8s-cluster.tf uses two resources:
google_container_cluster
google_container_node_pool
After setting the value of the environment variable to the contents of the JSON key file,
click Add Environment Variable.
Now that we have finished creating our context, go back to the projects page in CircleCI (by
clicking on the X icon on the top right). Search for the GitHub repository you created for your
infrastructure. Click Set Up Project.
And then Start Building.
This job may take a while to finish. Once it ends, the infrastructure should be available on
Google Cloud Console and our build would be green.
Success!
When you do not need the infrastructure anymore, you can run (locally):
terraform destroy
Conclusion:
Thus deploying infrastructure with an approval job using Terraform is done and verified.
EX.7 INTEGRATE JENKINS SAST TO SONARQUBE - DEVSECOPS
LAB OBJECTIVES:
To integrate Jenkins SAST results with SonarQube (DevSecOps).
LAB OUTCOMES:
On successful completion, the student will be able to integrate Jenkins SAST with SonarQube (DevSecOps).
PROCEDURE:
SonarQube Setup
SonarQube Instance
Before proceeding with the integration, we will set up a SonarQube instance. The choice of platform is yours; in this tutorial, we are using the SonarQube Docker container.
This will basically tell the sonar scanner to send the analysis data to the project with the mentioned project key. Along with this, we are using Python Bandit to scan for Python dependency vulnerabilities and more, so we add the path of its report to the properties file.
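A sketch of such a sonar-project.properties file; the project key and name are placeholders, and sonar.python.bandit.reportPaths is the property that imports the Bandit JSON report:

```properties
# sonar-project.properties - sketch with placeholder values
sonar.projectKey=devsecops-demo
sonar.projectName=devsecops-demo
sonar.sources=.
# Import the Bandit scan results (JSON) generated earlier in the pipeline
sonar.python.bandit.reportPaths=bandit-report.json
```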
SonarQube Scanner Plugin for Jenkins
Tool Configuration SonarQube Scanner
Now, we need to configure the Jenkins plugin for SonarQube Scanner to make a connection with
the SonarQube Instance. For that, go to Manage Jenkins > Configure System > SonarQube
Server. Then, Add SonarQube. In this, give the Installation Name, Server URL then Add the
Authentication token in the Jenkins Credential Manager and select the same in the configuration.
SonarQube Server Configuration in Jenkins
Then, we need to set up the SonarQube Scanner to scan the source code in the various stages. For
the same, go to Manage Jenkins > Global Tool Configuration > SonarQube Scanner. Then,
click the Add SonarQube Scanner button. From there, give the scanner a name and add an installer of your choice. In this case, I have selected SonarQube Scanner from Maven Central.
SonarQube Scanner Configuration for Jenkins
SonarQube Scanner in Jenkins Pipeline
Now, it's time to integrate the SonarQube Scanner into the Jenkins pipeline. For this, we are going to add one more stage called sonar-publish to the Jenkinsfile, and inside it, I am adding the following code.
Here, this will collect the SonarQube server information from the sonar-project.properties file and publish the collected information to the SonarQube server. So, the overall code will look like the below snippet.
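The snippet is not reproduced above; a minimal sketch of what the sonar-publish stage could look like, assuming the server name 'sonarqube' matches the installation name configured in Manage Jenkins:

```groovy
stage('sonar-publish') {
    steps {
        // 'sonarqube' must match the SonarQube server name configured in Jenkins
        withSonarQubeEnv('sonarqube') {
            // Reads sonar-project.properties from the workspace root
            sh 'sonar-scanner'
        }
    }
}
```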
Once we execute the Jenkins Pipeline for this project, we will get the following output
Jenkins Pipeline for SonarQube
It will execute the SonarQube Scanner and collect the SAST information and the Python Bandit report in JSON format, then publish them to the SonarQube server. If you log in to SonarQube and visit the dashboard, you will see the analysis of the project there.
Code analysis Result in SonarQube
Since we have both Jenkins and SonarQube at the enterprise standard, we have a lot of features, including an alert system, with which we can configure email or instant-message notifications for the findings in SonarQube or Jenkins. In the best case, we can automatically convert certain bugs or findings into tickets and assign them to the respective developers.
Conclusion:
Thus to Integrate Jenkins SAST to SonarQube – DevSecOps is studied and verified.
EX.8. RUNNING JENKINS AND SONARQUBE ON DOCKER
LAB OBJECTIVES:
To run Jenkins and SonarQube on Docker.
LAB OUTCOMES:
On successful completion, the student will be able to run Jenkins and SonarQube on Docker.
PROCEDURE:
Enough on the introductions. Let’s jump into the configurations, shall we? First of all, let’s
spin up Jenkins and SonarQube using Docker containers. Note that, we are going to use
docker compose as it is an easy method to handle multiple services. Below is the content of
the docker-compose.yml file which we are going to use.
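The file contents are not reproduced above; a minimal sketch of such a docker-compose.yml, where the image tags and port mappings are assumptions to adjust for your environment:

```yaml
# docker-compose.yml - sketch; adjust image tags and add volumes as needed
version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"   # Jenkins web UI
      - "50000:50000" # agent (slave) connections
  sonarqube:
    image: sonarqube:lts
    ports:
      - "9000:9000"   # SonarQube web UI
```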
docker-compose up is the command to run the docker-compose.yml file.
This file, when run, will automatically host Jenkins listening on port 8080, along with a slave agent.
Jenkins hosted using Docker
The SonarQube will be hosted listening on port 9000.
SonarQube hosted using Docker
Configuring Jenkins for SonarQube Analysis
For this, let’s go to Jenkins -> Manage Jenkins -> Manage Plugins. There, navigate to
“Available” view and look for the plugin “SonarQube Scanner”. Select the plugin and
click on “Install without restart” and wait for the plugin to be installed.
Installing SonarQube Scanner Plugin
For that, let’s click on Jenkins -> Manage Jenkins -> Configure System -> SonarQube
Servers and fill in the required details.
SonarQube Server Configuration
To get the server authentication token, login to SonarQube and go to Administration ->
Security -> Users and then click on Tokens. There, Enter a Token name and click on
Generate and copy the token value and paste it in the Jenkins field and then click on
“Done”.
Creating Authorization Token
Finally, save the Jenkins Global configurations by clicking on the “Save” icon.
Next, go to Manage Jenkins -> Global Tool Configuration -> SonarQube Scanner -> SonarQube
Scanner installations. Enter any meaningful name under the Name field and select an
appropriate method in which you want to install this tool in Jenkins. Here, we are going to
select “Install automatically” option. Then, click on “Save”.
SonarQube Scanner Configuration in Jenkins
Creating and Configuring Jenkins Pipeline Job
Let's click on “New Item” on the Jenkins home page and enter the job name as
“sonarqube_test_pipeline” and then select the “Pipeline” option and then click on “OK”.
Creating Jenkins Pipeline job
Now, inside the job configuration, let’s go to the Pipeline step and select Pipeline Script
from SCM and then select Git and enter the Repository URL and then save the job.
Pipeline Job Configuration
As shown in the image, the source code is under “develop” branch of the repository
“MEANStackApp”. We have also committed a Jenkinsfile there which will be the input for
our pipeline job.
The Jenkinsfile has the logic to check out the source code and to make the SonarQube tool perform code analysis on it. Below is the content of this Jenkinsfile.
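The Jenkinsfile contents are not reproduced above; a sketch under stated assumptions: the tool name 'SonarQubeScanner' and server name 'SonarQube' must match your Jenkins configuration, and the repository URL is a placeholder:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Placeholder URL; point this at your MEANStackApp repository
                git branch: 'develop', url: 'https://github.com/your-user/MEANStackApp.git'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                script {
                    // Name must match the Global Tool Configuration entry
                    def scannerHome = tool 'SonarQubeScanner'
                    // Name must match the SonarQube Servers entry
                    withSonarQubeEnv('SonarQube') {
                        sh "${scannerHome}/bin/sonar-scanner"
                    }
                }
            }
        }
    }
}
```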
Building the Jenkins Pipeline Job
Since we have configured everything, let’s build the job and see what happens. For that,
click on the “Build Now” option in the job.
Building the Jenkins job
From the logs below, it can be seen that the Jenkins job is successful.
Logs of Jenkins Pipeline Job
Below is the job view in Blue Ocean. Pretty, isn’t it?
Job View in Blue Ocean
To check the analysis report, let’s go to the link as shown in the build logs. The link
basically points to the SonarQube server URL.
SonarQube Analysis Report
Here, it says there are no bugs and vulnerabilities in this code, and the Quality Gate status is "Passed". Though it's a simple app, it is good to know that the code quality is good.
We have reached the end of this article. Here, we have learned how to integrate SonarQube
with Jenkins for a simple node.js app in order to perform code analysis. The same
procedure can be followed for applications written in any other programming language.
Conclusion:
Thus Running Jenkins and SonarQube on Docker is studied and verified.
EX.9 CONTINUOUS MONITORING WITH NAGIOS
LAB OBJECTIVES:
To perform continuous monitoring using Nagios.
LAB OUTCOMES:
On successful completion, the student will be able to perform continuous monitoring with Nagios.
PROCEDURE:
Open the terminal and use rpm -Uvh command and paste the link.
We need to download one more repository, for that visit the website
‘http://rpms.famillecollet.com/enterprise/‘
Right-click and copy the link location for ‘remi-release-6.rpm‘
Again open the terminal and use rpm -Uvh command and paste the link.
Fine, so we are done with the pre-requisites. Let’s proceed to the next step.
Step 2: Install Nagios Core, Nagios Plugins and NRPE (Nagios Remote Plugin Executor):
Apache web server is required to monitor the current web server status.
Now, we will enable swap memory of at least 1 GB. It's time to create the swap file itself using the dd command:
dd if=/dev/zero of=/swap bs=1024 count=1048576
Swap is basically used to free some not-so-frequently accessed information from RAM and move it to a specific partition on our hard drive.
Prepare the file as swap space with mkswap /swap. If we see no errors, our swap space is ready to use. To activate it immediately, type:
swapon /swap
This file will last on the virtual private server until the machine reboots. You can ensure that the
swap is permanent by adding it to the fstab file.
The operating system kernel can adjust how often it relies on swap through a configuration
parameter known as swappiness.
cat /proc/sys/vm/swappiness
Finally, we are done with the second step.
Let’s proceed further and set Nagios password to access the web interface.
Set the password to access the web interface, use the below command:
Here, give the user name and password. By default, the user name is nagiosadmin, and the password is what you set in the previous step. Finally, press OK.
Firstly, we need to install the required packages on the client, like we did on the Nagios server machine. So, just execute the same commands; consider the below screenshots:
chkconfig nrpe on
Our next step is to edit the nrpe.cfg file. We will be using the vi editor; you can choose any other editor as well.
We need to add the IP address of the monitoring server to the allowed_hosts line; consider the below screenshot:
iptables -N NRPE
/etc/init.d/iptables save
vi /etc/nagios/nagios.cfg
mkdir /etc/nagios/servers/
cd /etc/nagios/servers
Create a new file in this directory with the .cfg extension and edit it. We will name it client.cfg, and we will be using the vi editor.
vi /etc/nagios/servers/client.cfg
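A sketch of such a client.cfg; the host name, address, and service check are placeholders, and check_nrpe assumes the NRPE check command has been defined in your Nagios commands configuration:

```
# /etc/nagios/servers/client.cfg - sketch with placeholder values
define host {
    use        linux-server
    host_name  client
    alias      Client Machine
    address    192.168.1.50
}

define service {
    use                 generic-service
    host_name           client
    service_description CPU Load
    check_command       check_nrpe!check_load
}
```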
Similarly, you can add any number of services that you want to monitor. The same configuration can be used to add 'n' number of clients.
Last step, set the folder permissions correctly and restart Nagios.
Conclusion:
Thus continuous monitoring using Nagios is done and verified.
EX.10 MONITORING WINDOWS SERVER WITH NAGIOS CORE
LAB OBJECTIVES:
To monitor a Windows server with Nagios Core.
LAB OUTCOMES:
On successful completion, the student will be able to monitor a Windows server with Nagios Core.
PROCEDURE:
In this simplified example, we want to install the listener onto the Nagios server, using an
Ubuntu system. Replace {version} with the current version of the NRDP service:
Navigate to the Nagios server and the NRDP listener, such as http://10.0.0.10/nrdp. Use the
token previously retrieved from the authorized_tokens section of the configuration
file /usr/local/nrdp/server/config.inc.php to send the following JSON to test the listener:
The check_ncpa.py plugin enables Nagios to monitor the installed NCPAs on the hosts. Follow
these steps to install the plugin:
1. Download the plugin.
After the NRDP installation, install the NCPA. Download the installation files and run the
install.
For the listener configuration, follow these guidelines:
URL: Use the IP or host name of the Nagios server that hosts the installed NRDP agent.
Conclusion:
Thus the study of monitoring Windows Server with Nagios Core is done and verified.
EX.11 CREATING A SERVERLESS WORKFLOW WITH AWS STEP FUNCTIONS AND AWS LAMBDA
Lab Objectives:
To understand about the Concepts of Creating a Serverless Workflow with AWS Step Functions
and AWS Lambda
Lab Outcomes:
Know the basic concepts of Creating a Serverless Workflow with AWS Step Functions and AWS
Lambda
PROCEDURE:
We are able to sit down with the call center manager to talk through best practices for handling
support cases. Using the visual workflows in Step Functions as an intuitive reference, we define
the workflow together.
Then, we'll design our workflow in AWS Step Functions. Our workflow will call one AWS
Lambda function to create a support case, invoke another function to assign the case to a support
representative for resolution, and so on.
a. Open the AWS Step Functions console. Select Author with code snippets. In the Name text
box, type CallCenterStateMachine.
d. Click Next.
Step 2. Create an AWS Identity and Access Management (IAM) Role
AWS IAM is a web service that helps us securely control access to AWS resources. In this step,
we will create an IAM role that allows Step Functions to access Lambda.
a. In another browser window, open the AWS Management Console. When the screen loads,
type IAM in the search bar, then select IAM to open the service console.
b. Click Roles and then click Create Role.
b. Click Create function.
c. Select Author from scratch.
Name – OpenCaseFunction.
Runtime – Node.js 4.3.
Role – Create custom role.
g. Replace the contents of the Function code window with the following code, and then
click Save.
h. At the top of the page, click Functions.
When complete, you should have 5 Lambda functions.
c. In the State machine definition section, find the line below the Open Case state which starts
with Resource.
If you click the sample ARN, a list of the AWS Lambda functions in your account will appear
and you can select it from the list.
d. Repeat the previous step to update the Lambda function ARNs for the Assign Case, Work on
Case, Close Case, and Escalate Case Task states in your state machine, then click Save.
b. A New execution dialog box appears. To supply an ID for your support case, enter the content
from below in the New execution dialog box in the Input window, then click Start execution.
{
"inputCaseID": "001"
}
c. As our workflow executes, each step will change color in the Visual workflow pane. Wait a
few seconds for execution to complete. Then, in the Execution details pane,
click Input and Output to view the inputs and results of our workflow.
d. Step Functions lets us inspect each step of our workflow execution, including the inputs and
outputs of each state. Click on each task in our workflow and expand the Input and Output fields
under Step details. We can see that the case ID we entered into our state machine is passed
from each step to the next, and that the messages are updated as each Lambda function
completes its work.
e. Scroll down to the Execution event history section. Click through each step of execution to see
how Step Functions called your Lambda functions and passed data between functions.
f. Depending on the output of our WorkOnCaseFunction, our workflow may have ended by
resolving the support case and closing the ticket, or escalating the ticket to the next tier of
support. We can re-run the execution a few more times to observe this different behavior. We could also extend our state machine to loop back to the Work On Case state; no changes to our Lambda functions would be required. The functions we built for this lab are samples only, so we'll move on to the next step.
Important: Terminating resources that are not actively being used reduces costs and is a best
practice. Not terminating our resources can result in a charge.
a. At the top of the AWS Step Functions console window, click State machines.
b. In the State machines window, select the CallCenterStateMachine and click Delete. To confirm that we want to delete the state machine, click Delete state machine in the dialog box that appears. Our state machine will be deleted in a minute or two, after Step Functions has confirmed that any in-process executions have completed.
c. Next, we'll delete our Lambda functions. Click Services in the AWS Management Console
menu, then select Lambda.
d. In the Functions screen, click each of the functions you created for this lab and then
select Actions and then Delete. Confirm the deletion by clicking Delete again.
e. Lastly, we'll delete our IAM roles. Click Services in the AWS Management Console menu,
then select IAM.
f. Select both of the IAM roles that we created for this lab, then click Delete role. Confirm the
delete by clicking Yes, Delete on the dialog box.
Conclusion:
Thus the study of Creating a Serverless Workflow with AWS Step Functions and AWS Lambda
is done and verified.
EX.12. AMAZON S3 TRIGGER TO INVOKE A LAMBDA FUNCTION
LAB OBJECTIVES:
To use an Amazon S3 trigger to invoke a Lambda function.
LAB OUTCOMES:
On successful completion, the student will be able to use an Amazon S3 trigger to invoke a Lambda function.
PROCEDURE:
Prerequisites
To use Lambda and other AWS services, we need an AWS account. If we do not have an
account, visit aws.amazon.com and choose Create an AWS Account. For instructions, see How
do I create and activate a new AWS account?
This lab assumes that we have some knowledge of basic Lambda operations and the Lambda
console. If we haven't already, follow the instructions in Getting started with Lambda to create
your first Lambda function.
Create an Amazon S3 bucket and upload a test file to our new bucket. Our Lambda function retrieves information about this file when we test the function from the console.
After creating the bucket, Amazon S3 opens the Buckets page, which displays a list of all
buckets in our account in the current Region.
1. On the Buckets page of the Amazon S3 console, choose the name of the bucket that we created.
2. On the Objects tab, choose Upload.
3. Drag a test file from our local machine to the Upload page.
4. Choose Upload.
Use a function blueprint to create the Lambda function. A blueprint provides a sample function
that demonstrates how to use Lambda with other AWS services. Also, a blueprint includes
sample code and function configuration presets for a certain runtime. For this lab, we can choose
the blueprint for the Node.js or Python runtime.
When we configure an S3 trigger using the Lambda console, the console modifies our
function's resource-based policy to allow Amazon S3 to invoke the function.
7. Choose Create function.
The Lambda function retrieves the source S3 bucket name and the key name of the uploaded
object from the event parameter that it receives. The function uses the Amazon S3 getObject API
to retrieve the content type of the object.
While viewing your function in the Lambda console, you can review the function code on
the Code tab, under Code source. The code looks like the following:
Node.js
Python
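As a sketch of the blueprint's logic for the Python runtime, the handler below only extracts the bucket name and object key from the event. The real blueprint additionally calls s3.get_object(Bucket=bucket, Key=key) to read the object's ContentType; that call is omitted here so the sketch runs without AWS credentials:

```python
import urllib.parse

def lambda_handler(event, context):
    # Each S3 notification carries one or more records; take the first.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Object keys arrive URL-encoded (spaces become '+').
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    # The real blueprint would now call s3.get_object(Bucket=bucket, Key=key).
    return {"bucket": bucket, "key": key}
```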
Invoke the Lambda function manually using sample Amazon S3 event data
For more information on these graphs, see Monitoring functions in the AWS Lambda console.
5. (Optional) To view the logs in the CloudWatch console, choose View logs in CloudWatch.
Choose a log stream to view the logs output for one of the function invocations.
We can now delete the resources that we created for this lab, unless we want to retain them. By
deleting AWS resources that we're no longer using, we prevent unnecessary charges to our AWS account.
Conclusion:
Thus the study of Amazon S3 trigger to invoke a Lambda function is done and verified.