GCP


google cloud compute engine or virtual machines

compute (server-based)
----------
1-compute engine
2-kubernetes engine
3-vmware engine

serverless
----
1-app engine
2-cloud run
3-cloud functions

search gcp console-then sign in-dashboard-

create vm
-----
login gcp-compute engine-vm instances-compute engine api-enable

create instance-
name-web1
region-mumbai
zone-asia-south1-a
machine configuration
-----
general purpose-select
series-n1
machine type-n1-standard-1
availability policies-standard
display device-enable display device (not req)
boot disk--change-
operating system-windows server
version-2022
boot disk type-balanced persistent disk
size-50
advanced
---
delete boot disk
google-managed encryption

select

firewall-allow http and https

create

create vm with cloudshell


-----------
gcloud compute instances create myvm01 --machine-type n1-standard-2 --zone us-west4-b

gcloud compute ssh myvm01 --zone=us-west4-b


sudo apt update -y
sudo apt install apache2 -y
sudo vi /var/www/html/index.html
modify
then check with the public ip

exit exit
gcloud compute images list
with image you can create the vm
---
gcloud compute instances create vvm2 --image-family fedora-coreos-testing --image-project fedora-coreos-cloud --zone us-west4-a

gcloud compute instances describe vm1 (details about the machine)


gcloud compute instances stop vvm1 (stop the instances)
gcloud compute instances start vvm1 (start the vm)
gcloud compute instances list (list of the instances)
gcloud compute instances delete vm1

change the machine type


-----
compute engine-vm instance-create-
name-oldvm
region-
type-n1

create

then stop the instance


then go inside the vm -edit-change the type-N1 to N2
then save
start the machine

cli way to change the type


-------
----
select cloudshell-
gcloud compute instances stop oldvm1
gcloud compute instances set-machine-type oldvm1 --machine-type e2-standard-2

create custom type


--------

gcloud compute instances set-machine-type old-vm --machine-type e2-custom-4-2048

gcloud compute instances describe old-vm


df -h

resize your persistent disk of vm


-----
create virtual machine-
go inside the vm-
open vm on ssh

sudo -i
lsblk
gcloud compute disks resize instance-1 --zone us-west4-b --size 30
lsblk

parted /dev/sda
resizepart
fix
1
yes
100%
quit
lsblk
check sda1
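The interactive parted session above can also be done non-interactively. Note that after growing the partition, the filesystem itself still has to be grown before `df -h` shows the new space. A sketch, assuming a Debian image whose root filesystem is ext4 on /dev/sda1 (growpart comes from the cloud-guest-utils package):

```shell
# grow partition 1 of /dev/sda to fill the resized disk
sudo growpart /dev/sda 1
# grow the ext4 filesystem to fill the partition
sudo resize2fs /dev/sda1
# confirm the new size
df -h /
```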

custom image
------
1st create an instance
name-instance-1
http and https allow

connect with ssh


sudo -i
touch file1 file2 file3
ls
apt update -y
apt install apache2 -y
systemctl status apache2
vi /var/www/html/index.html
edit

copy the public ip and check


then go storage-images-create image
name-myimage
source-disk
source disk-instance-1
tick -keep instance running
location-multi-regional-create
go to instance-create instance-
select the image and create the instance
open the instance through ssh
sudo -i
ls (check file1 file2 file3 is there)
copy the public ip and paste it on google chrome ---its working

image through cloudshell
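The notes do not show the Cloud Shell commands for this flow; a possible equivalent, using the names from the lab (the zone is an assumption, and --force lets the image be taken while instance-1 keeps running):

```shell
# create a custom image from the boot disk of instance-1
gcloud compute images create myimage \
    --source-disk=instance-1 \
    --source-disk-zone=us-central1-a \
    --force

# create a new vm from that image
gcloud compute instances create instance-2 \
    --image=myimage \
    --zone=us-central1-a
```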

machine image(v-17)
-----
create an instance-
name-instance-1
type-e2
advance option-disk-add disk
disk source type-blank disk
disk type- balanced persistent disk
size-10
deletion rule-delete disk
save create
open ssh
sudo su
touch file1 file2 file3
mkdir mydir
apt update
apt install apache2 -y
systemctl status apache2
lsblk

go instance-machine image-create-
name-mymachineimage
source-instance-1
location-regional
create
select the mymachineimage-create instance-
open ssh
sudo su
ls
lsblk

then in cloudshell how to create machine image


---
open cloudshell
gcloud compute machine-images create mycloudshellima --source-instance=instance-1 --source-instance-zone=us-central1-a

then our machine image is done


go to machine image check machine image is ready

then with machine image we can create instance


gcloud beta compute instances create vmcloudshell --zone=us-central1-a --source-machine-image=mycloudshellima

cloud storage
--------
storage types in google cloud
1-cloud storage----------in aws: S3
2-persistent disk-----------in aws: EBS (elastic block store)
3-filestore-----------in aws: EFS (elastic file system)

google cloud database store


---
1-bigtable
2-firestore(backend as a service)
3-cloud sql
4-cloud datastore
5-cloud spanner
6-memorystore

Create bucket and upload object(l-21)


--------

cloud storage-bucket- create bucket-


name-my1stbucket79-continue
choose where to store-multi-region-continue
choose a storage class for your data-set a default class-standard-continue
choose how to control access to objects-enforce public access prevention---uniform
choose how to protect object data-soft delete policy
data encryption--google-managed encryption
create
check bucket is ready
open bucket-upload file-browse-open-upload
open the uploaded file-copy the authenticated url and paste it in the browser-it is
showing
go to bucket-permissions-grant access-
new principals-allUsers (or your mail [email protected])
role-cloud storage-storage object viewer-save

cloudstorage in cloudshell
------
gsutil mb -b on -l us-east1 gs://rajbuckett1979/

now check rajbuckett1979 is ready

upload
---
in cloudshell: upload-upload the file
pwd
ls (file is there)
gsutil cp disk.txt gs://rajbuckett1979

check file upload

to download
--
gsutil cp gs://rajbuckett1979/disk.txt /home/rajlaxmidas80/myfd/raj.txt

ls
myfd directory is there
cd myfd
ls
raj.txt is there

gsutil ls gs://rajbuckett1979 (list)

public access
--
gsutil iam ch allUsers:objectViewer gs://rajbuckett1979

delete permission
---
gsutil iam ch -d allUsers:objectViewer gs://rajbuckett1979

delete the bucket


--
gsutil rm -r gs://rajbuckett1979

object versioning and lifecycle management(l-22)


-------
1st create a bucket and upload a file.
go to bucket --protection-object versioning off-confirm

versioning in cloudshell
---------
how to do versioning off in cloud shell
------
gsutil versioning set off gs://bucker568974
how to do versioning on in cloud shell
---
gsutil versioning set on gs://bucker568974

to check versioning status


-----
gsutil versioning get gs://bucker568974

lifecycle management
-----

go to bucket-lifecycle-add rule-set storage class to nearline-continue-


select object condition-age-15days-continue-create

cloudshell
----
1st upload the rule1 file in json format
gsutil lifecycle set 1rule.json gs://bucket40279
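The 1rule.json file referred to above is not shown in the notes; a minimal version matching the console rule (set storage class to nearline once objects are 15 days old) could be written from Cloud Shell like this:

```shell
# write the lifecycle rule: move objects to NEARLINE once they are 15 days old
cat > 1rule.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 15}
    }
  ]
}
EOF

# sanity-check that the file is valid json before applying it
python3 -m json.tool 1rule.json
```

then apply it with `gsutil lifecycle set 1rule.json gs://bucket40279` as above.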

create signed URL(l-23)


------

1st create a bucket and then upload a file


then go to google cloud-APIs and services-credentials-create credentials-service
account-
or if you already have one, use it.
then click on the service account (email)-keys-add key-create new key-json-create

open cloudshell-upload the keyfile-


sudo su
pip install pyopenssl
gsutil signurl -d 60s key.json gs://mybucket8547/disk.txt

if i select the url and click, it opens

static websites hosting on cloud storage(l-24)


--------------------

create bucket
name-web-bucket345
open cloud shell
then search google-https://www.html5webtemplates.co.uk/-website templates-
simplestyle_banner-right click on download simplestyle_banner-copy the link
in cloudshell paste it

command:-
---
wget https://www.html5webtemplates.co.uk/wp-content/uploads/2020/05/simplestyle_banner.zip
ls
mkdir website
mv simplestyle_banner.zip website
cd website
ls
unzip simplestyle_banner.zip
cd simplestyle_banner/
ls
cd ..
gsutil -m cp -r simplestyle_banner gs://web-bucket345

go to bucket check all the files are there


select the file-permissions-grant access-new principals-allUsers-role-storage object viewer-save
go to bucket-open file-index.html-copy the url and paste it in the browser
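Cloud Storage can also serve index.html automatically instead of using the full object url; a sketch with gsutil (the notes use both web-bucket345 and web-bucket789 as the bucket name; web-bucket345 is assumed here):

```shell
# make every object in the bucket publicly readable
gsutil iam ch allUsers:objectViewer gs://web-bucket345

# serve index.html as the main page of the bucket website
gsutil web set -m index.html gs://web-bucket345
```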

create automatic vpc (l-26)


--------
vpc network-vpc networks-create vpc network-
name-myvpc
desc-1st vpc
max-1460
vpc network ULA internal ipv6 range- disabled
subnet-automatic
firewall rules-myvpc allow rdp-myvpc allow ssh
dynamic routing mode-regional
create

then create instance-instance-1


only change edit network interface
network-myvpc

go to vpc network-myvpc-firewall-add firewall rule-


name-rule-1
description-icmp allow
logs- off
network -myvpc
priority-1000
direction of traffic-ingress
action on match-allow
protocols and ports-specified protocols and ports
other-icmp-create

then ping with other instance

in cloud shell
----
gcloud compute networks create myvpc1 --project=central-cab-410203 --subnet-mode=auto --mtu=1460 --bgp-routing-mode=regional

gcloud compute firewall-rules create myvpc1-allow-rdp --project=central-cab-410203 --network=projects/central-cab-410203/global/networks/myvpc1 --description=Allows\ RDP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network\ using\ port\ 3389. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=tcp:3389

custom vpc(l-27)
--
automode vpc:
1-when you create the vpc, subnets are automatically created in every region
2-control is less
custom vpc:
1-only the vpc is created, not the subnets
2-control is more; in real scenarios it is used

lab
---
vpc network-vpcnetworks-create vpc network-
name-mycustomvpc
vpc network ULA internal ipv6 range -disabled
subnets -custom
newsubnet
--
name--subnet-east
region-asia-east1
ip stack type- ipv4(single-stack)
ipv4 range-10.0.1.0/24
private google access-off
flow logs -off
done

add subnet-
name-subnet-useast
region-us-east1
ip stack type- IPV4(single-stack)
IPV4 range-10.0.2.0/24
done
Firewall rules
--
1-icmp allow
2-tcp:22 allow
dynamic routing mode-regional
create
then create vm1 in the asia-east1 region
another vm2 in us-east1
on vm1, ping vm2
its pinging
click equivalent command line copy the command
you can also create in cloud shell
----------------------------
gcloud compute networks create customvpc --project=central-cab-410203 --subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional

gcloud compute networks subnets create subnet-asiaeast1 --project=central-cab-410203 --range=10.0.1.0/24 --stack-type=IPV4_ONLY --network=customvpc --region=asia-east1

gcloud compute networks subnets create subnet-useast1 --project=central-cab-410203 --range=10.0.2.0/24 --stack-type=IPV4_ONLY --network=customvpc --region=us-east1

gcloud compute firewall-rules create customvpc-allow-icmp --project=central-cab-410203 --network=projects/central-cab-410203/global/networks/customvpc --description=Allows\ ICMP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=icmp
gcloud compute networks describe customvpc (details)
gcloud compute networks update customvpc --bgp-routing-mode=global

then create instance


gcloud compute instances create myvm1 --zone=asia-east1-a --machine-type=n1-standard-1 --subnet=subnet11

vpc peering(l-28)
------
vpc network-vpc networks-create
name-vpc-usa
maximum-1460
vpc network ULA-disabled
subnets-custom
new subnet
--
name-subnet-usa
region-us-east1
IP stack-IPv4
IPV4 range-10.0.0.0/24
pvt-off
flow logs-off
hybrid subnet-off
allow--- ssh,icmp
dynamic routing mode-regional
create

another vpc
---------
vpc network-vpc networks-create
name-vpc-asia
maximum-1460
vpc network ULA-disabled
subnets-custom
new subnet
--
name-subnet-asia
region-asia-east1
IP stack-IPv4
IPV4 range-172.16.0.0/24
pvt-off
flow logs-off
hybrid subnet-off
allow--- ssh,icmp
dynamic routing mode-regional
create

create instance
-----
instance-1
----------
compute engine=vm instance-create
name-us-vm
region-us-east1
network interface-vpc-usa
subnet-10.0.0.0/24
create

instance-2
------

compute engine=vm instance-create


name-asia-vm
region-asia-east1
network interface-vpc-asia
subnet-172.16.0.0/24
create
go to instance1--ssh-authorize-ping 172.16.0.2 --it does not happen (no peering yet)

then go to
vpc network--vpc network peering--create connection-continue-

name-mypeering
your vpc-network-vpc-usa
in project central-cab-410203
vpc network name-vpc-asia
ipv4--create

name-mypeering2
your vpc-network-vpc-asia
in project central-cab-410203
vpc network name-vpc-usa
ipv4--create
then ping it will happen
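The two peering connections above also have a gcloud equivalent; a sketch using the names from the lab (peering must be created from both sides before it becomes ACTIVE):

```shell
# peer vpc-usa -> vpc-asia
gcloud compute networks peerings create mypeering \
    --network=vpc-usa \
    --peer-network=vpc-asia

# peer vpc-asia -> vpc-usa
gcloud compute networks peerings create mypeering2 \
    --network=vpc-asia \
    --peer-network=vpc-usa
```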

cloud NAT gateway (l-29)


---------
1st create vpc
vpc network-create-
name-myvpc
custom
subnet-pub-subnet-10.0.1.0/24
add subnet-pvt-subnet-10.0.2.0/24
create

create two instance


---
instance-1
name-pub-vm
region-same as vpc
windows server
subnet-pub-subnet
create
instance-2
--
name-pvt-vm
region-same
windows server
subnet-pvt-subnet

Primary internal IPv4 address-ephemeral (automatic)

external IPV4-none-
create

then-vm1-set passwd-set-Bz(IRlvg@mYQ1tr
download the file-
remote desktop-
computer name=pvt ip of 2nd vm
username-rajlaxmidas80
pass -2nd vm passwd
ping 8.8.8.8-not happening
go to vpc network-ip addresses-reserve external static ip-
name-nat-ip
region-asia-east1
attached to-none-reserve

dashboard-search cloud router-create

name-nat-router
network-myvpc
region-asia-east1
create
search cloud NAT-
name-nat-gateway
network-myvpc
region-asia-east1
cloud router-nat-router
cloud nat ip address-manual
ip address 1-select nat-ip
create

then check in cmd-ping 8.8.8.8


ping google.com
reply is coming

vm2-set passwd-\+)Ui]:]fES+%fM
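The router and NAT gateway above can also be created from Cloud Shell; a sketch using the names from the lab:

```shell
# reserve a static external ip for the nat gateway
gcloud compute addresses create nat-ip --region=asia-east1

# cloud router in the same region as the private subnet
gcloud compute routers create nat-router \
    --network=myvpc \
    --region=asia-east1

# nat gateway on that router, using the reserved ip for all subnets
gcloud compute routers nats create nat-gateway \
    --router=nat-router \
    --region=asia-east1 \
    --nat-all-subnet-ip-ranges \
    --nat-external-ip-pool=nat-ip
```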

load balancer(l-32)
----
create a custom vpc-
name-myvpc
asia-east1
2 subnet
subnet1-10.0.0.0/24
subnet2-172.16.0.0/24
firewall rules--
--
allow icmp, ssh, rdp
create

then go to compute engine-instance group-


create-
name-load
instance template-create a new instance template-
name-
location-global
machine type-n1
image-centos
allow http ,https
save and continue

num of instance-2
location-multiple zones-asia-east1
target distribution shape-even
auto scale-off
min-2
max-2
autohealing-create a health check-
name-new-health
scope-global
protocols-tcp
proxy-none
logs -off
health-5 5
2
2
save
create

come to instance
and open in ssh
instance-1
----

sudo su
yum install httpd -y
service httpd start
service httpd status
cd /var/www/html
vi index.html
this is server1
:wq
copy the public ip and paste it in the browser-its coming

open instance-2
--
sudo su
yum install httpd -y
service httpd start
service httpd status
cd /var/www/html
vi index.html
this is server2
:wq
copy the public ip and paste it in the browser-its coming

dashboard-networking-network services-load balancing-create
type-application
public facing-global
create load balancer-
frontend configuration-
name-front
protocol-http
ipv4-ephemeral-port 80-done
backend-create a backend service
name-my backend
backend type-instance group
protocol-80 80 30sec
instance group-myload
port-80
capacity-60
create
ok

routing rules-
mode-simple
name-mylb
create
copy the public ip of the load balancer and paste it in the browser-its happening

auto scaling-l-33
----------

compute engine-instance group-instance group-create-


name-instance group
template-create a new template-
choose windows machine
allow http and https
save and continue
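The notes stop before the autoscaling settings themselves; a plausible Cloud Shell sketch for turning on CPU-based autoscaling on a managed instance group (the group name, zone, and limits here are assumptions, not from the lab):

```shell
# autoscale the managed instance group between 2 and 5 vms,
# adding vms when average cpu goes above 60%
gcloud compute instance-groups managed set-autoscaling instance-group-1 \
    --zone=asia-east1-a \
    --min-num-replicas=2 \
    --max-num-replicas=5 \
    --target-cpu-utilization=0.60
```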

IAM
---
dashboard-IAM & identity-IAM
grant access-add [email protected](only gmail account)-basic-owner-
save
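The same grant can be made from Cloud Shell (project id as used elsewhere in these notes):

```shell
# give [email protected] the basic owner role on the project
gcloud projects add-iam-policy-binding central-cab-410203 \
    --member="user:[email protected]" \
    --role="roles/owner"
```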

service account (l-37)


-----
dashboard-IAM & identity-service account-create-
name-myservice
save and create
grant-select role-compute engine-compute admin-continue
done
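The same service account can be created from Cloud Shell; a sketch (project id from the notes):

```shell
# create the service account
gcloud iam service-accounts create myservice \
    --display-name="myservice"

# grant it compute admin on the project
gcloud projects add-iam-policy-binding central-cab-410203 \
    --member="serviceAccount:myservice@central-cab-410203.iam.gserviceaccount.com" \
    --role="roles/compute.admin"
```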
create an instance-change the service account under api access to myservice-and create
open with ssh
sudo su
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"
one token created copy it

ya29.c.c0AY_... (long access token, truncated)

curl https://compute.googleapis.com/compute/v1/projects/central-cab-410203/zones/us-central1-a/instances -H "Authorization: Bearer <the token from above>"

roles
---
create a user
give service account permission
give cloud admin permission
give service account user permission
then create an instance as the other user-it happens
how to create custom role
---
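The notes end here without the steps; a hedged sketch of a custom role with gcloud (the role id, title, and permission list are illustrative, not from the lab):

```shell
# custom role with just enough permissions to view and start/stop instances
gcloud iam roles create myCustomComputeRole \
    --project=central-cab-410203 \
    --title="My Custom Compute Role" \
    --permissions=compute.instances.list,compute.instances.get,compute.instances.start,compute.instances.stop \
    --stage=GA
```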

identity aware proxy


---------------------:-
create an instance
name-pvt-vm
go to network interface-external ipv4-none--create

then vpc network-firewall-create rule


allow ssh-22
source range-35.235.240.0/20 (google's IAP range)
create
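35.235.240.0/20 is the source range Google's Identity-Aware Proxy connects from. A sketch of the firewall rule plus an IAP-tunnelled ssh into the vm with no external ip (the network and zone here are assumptions):

```shell
# allow ssh only from google's identity-aware proxy range
gcloud compute firewall-rules create allow-iap-ssh \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20

# ssh to the vm with no external ip, tunnelled through iap
gcloud compute ssh pvt-vm --zone=us-central1-a --tunnel-through-iap
```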

app engine
--------
dashboard-serverless-appengine-dashboard-

cloudshell
ls
upload app folder
ls
cd app
node server.js
gcloud app describe
gcloud app create
gcloud app describe

create another cloudshell

gcloud app deploy -v version1


check 2 buckets created automatically
sudo apt install apache2-utils -y

cloud run
---------
dashboard-serverless-cloudrun-create

open cloudshell

sql
---
dashboard-more option-database-sql-create-mysql

cloud spanner
---------
database-spanner-

memory store redis


----
database-memorystore-redis
