Posts Tagged ‘case’

How to split large files in Windows via split command line and File Archive GUI tool easily

Tuesday, October 22nd, 2024

Moving very large files, especially VirtualBox Virtual Machines or other VM formats, between a Windows host and OneDrive might be a problem due to Azure Cloud configured limitations or other restrictions that your company Domain Administrator has set. Thus, if you have to migrate your old hardware laptop PC with Windows 10 to a newer, faster notebook computer with Windows 11 and you still want to keep and move your old large files, this short and trivial article will explain how.

The topic is an easy one and most novice sysadmins have already bumped into something like this, but anyway I found it useful to mention Git for Windows, as it is really handy here too, thus I wrote this small article.

The huge files to move, in my case experimental Virtual Machine images which I needed to somehow migrate to the freshly installed Windows laptop, were 40 / 80 or so Gigabytes. The most straightforward thing I tried was to simply add the files for inclusion into the OneDrive storage (via the OneDrive setup interface), however this failed, either due to OneDrive Cloud file format or size limitations, an Antivirus solution configured to filter out large file copies, or an outright prohibition on placing any kind of Virtual Machine images / ISOs straight into the cloud.

With such big files comes the question:

How to copy the Virtual Machines from your old hardware laptop to the Cloud (without being able to use an external SSD hard drive or a USB flash drive, because the Domain policy configured for your Windows makes externally connected drives read-only)?
 

Here are few sample approaches to do it both from command line (useful if you have to repeat the process or script it and deploy to multiple hosts) or for single hosts via an Archiver tool:

 

1. Using the split command from Git for Windows (Bash) MINGW64 shell

Download Git for Windows – https://git-scm.com/download – install it and you will get the MINGW64 bash for Windows executable.

Run it by either invoking the bash command from the command line, or trigger the Windows Run prompt (Windows button + R) and type the full path to the executable:
 

C:\Program Files\Git\git-bash.exe


Git-for-Windows-bash-for-windows-MINGW64-windows-11-screenshot

Use the integrated split program; to cut the file into pieces run:

 

# split MyVeryLargeFileVM.vdi -b 800m


To split the .VDI VirtualBox file into, let's say, 5 Gigabyte pieces:

# split MyVeryLargeFile.vdi -b 5g

The output pieces will be named as with a normal UNIX / GNU split command, i.e. each 5GB piece will be named like:

xaa
xab
xac
xad

If you want a more meaningful name for the split files, you can set a generated file prefix and tell split to make the suffixes 5 digits long:

# split MyVeryLargeFile.vdi MyVeryLargeFileVM-parts_ -b 5g -d -a 5

  • the -d flag switches to numerical suffixes (instead of the default aa, ab, ac, etc.),
  • and the option -a 5 tells split to make the suffixes 5 digits long.
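Once the pieces land on the new machine (or are downloaded back from OneDrive), they have to be glued together again. A minimal sketch of how to do that from the same MINGW64 bash, assuming the prefix used in the example above; comparing checksums of the original and the rejoined file is a good sanity check:

# join the parts back into a single file (shell expands the parts in order)
cat MyVeryLargeFileVM-parts_* > MyVeryLargeFile.vdi

# compare against the checksum taken on the old machine
sha256sum MyVeryLargeFile.vdi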

2. Split large files by Archiving them with Winrar (ShareWare) tool

If you already have WinRAR installed and you don't want to bother with too much typing on the command line, you can use good old WinRAR as a file splitter/joiner as well.

To split a file into smaller files, select "Store" as the compression method and enter the desired value (bytes) into "Split to volumes" box.
This way you can have split files named as filename.part1.rar, filename.part2.rar, etc.

WinRAR_cut-split-large-files-into-pieces-screenshot-Windows

3. Split files with 7-Zip (FreeWare)

Assuming you have 7-Zip installed on the PC, you can archive the big file into smaller pieces straight from the 7-Zip interface menus:

7zip-file-split-files-into-multiple-pieces-windows-screenshot

Or cut the single file into multiple volumes directly from Windows Explorer, by selecting the file and using the drop-down menus:
7zip-creation-of-multiple-parts-file-from-single-one-screenshot-Windows

7-zip-split-huge-files-to-lower-parts-set-volume-size
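If the 7-Zip command line client is available as well, the same split can be scripted. A rough sketch, assuming 7z.exe is in the PATH, using the -v volume switch and store (no compression) mode:

7z.exe a -mx0 -v5g MyVeryLargeFile.7z MyVeryLargeFile.vdi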

Summing it up – what did we learn?

What we learned is how to cut large files into multiple consecutive smaller ones for easy copying between network sides, both via Git for Windows and via manual copy-paste of the parted files to OneDrive / DropBox / pCloud or Google Drive.
There are plenty of other approaches to take, as there are also GUI file tools: besides GNU Win / GNU Tools for Windows, Cygwin or the GSplit GUI tool, or one of the many archiver tools available for Windows, another option is to use one of the many PowerShell and Batch scripts written to split both binary and text files. But I'll stop here, as I believe that is pretty much enough for most basic needs.

 

How to Copy / Backup Windows USB drive from one USB to a second

Friday, October 18th, 2024

Did you know that when you copy all the files from a USB Drive you don’t copy all the data?

Did you know that there may be files that are not even visible?

In this tutorial you will discover how to copy your USB Drive sector by sector, that is to say, you will see how to create a copy identical to your USB drive without missing anything!

This can be useful if you have formatted your USB stick by mistake and want to use it again: you can create an image of the USB Drive on your computer and then recover the formatted data from the image afterward!

The software used in this tutorial is called ImageUSB, it is free, portable, and easy to use.

Don’t use this method if you only want to copy some files; use it to clone/backup your USB Drive with all its master boot record, partition tables, and data.

Let’s go!

Clone Your USB Drive with ImageUSB on Windows 10

Start by downloading and extracting ImageUSB from this official URL: https://www.osforensics.com/tools/write-usb-images.html

Double-click on  imageUSB.exe .

Select your USB Drive from the list, select “Create image from USB drive“. Choose the location for the binary image file (.bin) that will be created from the USB drive.

Click on “Create“. Click “Yes” to confirm your choices.

imageusb clone usb flash drive backup restore 3 create image

Click “Yes” to overwrite the bin file in case it’s already there.

Wait for a couple of minutes…

After the image is created you should see this message. Click “OK“.

Now if you want to restore an image to your USB Drive, just select your USB Drive and choose “Write image to USB drive“. Choose your bin image and click on “Write“.

imageusb clone usb flash drive backup restore 7 write

This program is not recommended for USB Drives of different sizes…
Use it mostly for backup/restore of the same USB Drive for your bootable software.
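If you happen to do the same from a Linux box (or WSL), roughly the same sector-by-sector image can be taken with dd – a minimal sketch, assuming the stick shows up as /dev/sdX; double check the device name first, as dd will happily overwrite anything:

# create a raw image of the whole USB stick
dd if=/dev/sdX of=usb-backup.bin bs=4M status=progress

# write the image back to a stick of the same (or bigger) size
dd if=usb-backup.bin of=/dev/sdX bs=4M status=progress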

There you have it, the copy from one USB to a second USB is completed!

Enjoy ! 

 

 

Haproxy Enable / Disable Application backend server configured to roundrobin in emergency case via haproxy socket command

Thursday, May 2nd, 2024

haproxy-stats-socket

The haproxy LB backend BACKEND_ROUNDROBIN is configured to roundrobin with a health check port (check port 33333).
For example, let's say the haproxy server is running with a haproxy_roundrobin.cfg like this one.
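For illustration, a minimal backend definition of that kind could look roughly like below – server names, IP addresses and ports here are assumptions, not the real production config:

backend bk_BACKEND_ROUNDROBIN
    balance roundrobin
    server app-server1 10.0.0.1:8080 check port 33333
    server app-server2 10.0.0.2:8080 check port 33333
    server app-server3 10.0.0.3:8080 check port 33333
    server app-server4 10.0.0.4:8080 check port 33333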

Under some circumstances however, the check port TCP 33333 can be UP while one or more of the Application servers providing the resources to customers
(app-server1, app-server2, app-server3, app-server4) misbehaves. The Load Balancer cannot know this, because the traffic routing decision is made based on the check port only.

One example scenario when this can happen is if an Application server has an issue with connectivity towards the Database hosts:
(db-host1, db-host2, db-host3, db-host4)

If this happens, 25% of the traffic might still get balanced to the broken Application server. If such a scenario happens during OnCall and is identified as the problem,
the workaround is to temporarily disable the misbehaving App server member from the 4 configured roundrobin members in the production haproxy .cfg:

For example, if the app-server3 node is identified as failing and 25% of traffic via the LB is lost, to resolve it until the broken Application server node is fixed, you will have to temporarily exclude it from the ring of roundrobin backend hosts.

1.  Check the status of haproxy backends

echo "show stat" | socat stdio /var/lib/haproxy/stats

The output lists every frontend and backend member along with its current state (UP / DOWN / MAINT / DRAIN).
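The raw CSV output is hard to read; a small sketch to show only the proxy name, server name and status columns (fields 1, 2 and 18 per the stats CSV header):

echo "show stat" | socat stdio /var/lib/haproxy/stats | cut -d ',' -f 1,2,18 | column -s ',' -t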

Another way to do it, which will not directly cut your sessions to the server but keep them for some time, is to put the server you want to exclude from the haproxy roundrobin into "maintenance mode".

echo "set server bk_BACKEND_ROUNDROBIN/app-server3 state maint" | socat unix-connect:/var/lib/haproxy/stats stdio

Actually, there is an even better and more advanced way to take a backend server out of a configured roundrobin set of hosts: the "drain" state. It removes the server from load balancing while still allowing health checks and already established / persistent connections to continue, so clients keep their sessions until the App server connectivity issue is fixed and the server is taken out of maintenance.

echo "set server bk_BACKEND_ROUNDROBIN/app-server3 state drain" | socat unix-connect:/var/lib/haproxy/stats stdio

 

  • This sets the server in DRAIN mode: it no longer receives new load-balanced connections, while existing and persistent connections are allowed to finish.

To get a better idea on what is drain state, here is excerpt from haproxy official documentation:

Force a server's administrative state to a new state. This can be useful to
disable load balancing and/or any traffic to a server. Setting the state to
"ready" puts the server in normal mode, and the command is the equivalent of
the "enable server" command. Setting the state to "maint" disables any traffic
to the server as well as any health checks. This is the equivalent of the
"disable server" command. Setting the mode to "drain" only removes the server
from load balancing but still allows it to be checked and to accept new
persistent connections. Changes are propagated to tracking servers if any.


2. Disable backend app-server3 from the roundrobin


 

echo "disable server BACKEND_ROUNDROBIN/app-server3" | socat unix-connect:/var/lib/haproxy/stats stdio

# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
stats,FRONTEND,,,0,0,3000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,0,,,,0,0,0,0,0,0,,0,0,0,,,0,0,0,0,,,,,,,,
stats,BACKEND,0,0,0,0,300,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,282917,0,,1,2,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
Frontend_Name,FRONTEND,,,0,0,3000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,3,0,,,,0,0,0,0,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,
Backend_Name,app-server4,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,1,0,282917,0,,1,4,1,,0,,2,0,,0,L4OK,,12,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
Backend_Name,app-server3,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT,1,0,1,1,2,2,23,,1,4,2,,0,,2,0,,0,L4OK,,11,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
Backend_Name,BACKEND,0,0,0,0,300,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,282917,0,,1,4,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,

Once it is confirmed by the Application support colleagues that the machine is out of maintenance mode and working properly again, re-enable it:

3. Enable backend app-server3

echo "enable server bk_BACKEND_ROUNDROBIN/app-server3" | socat unix-connect:/var/lib/haproxy/stats stdio

4. Check backend situation again

echo "show stat" | socat stdio /var/lib/haproxy/stats
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
stats,FRONTEND,,,0,0,3000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,0,,,,0,0,0,0,0,0,,0,0,0,,,0,0,0,0,,,,,,,,
stats,BACKEND,0,0,0,0,300,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,282955,0,,1,2,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,
Frontend_Name,FRONTEND,,,0,0,3000,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,3,0,,,,0,0,0,0,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,
Backend_Name,app-server4,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,1,0,282955,0,,1,4,1,,0,,2,0,,0,L4OK,,12,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
Backend_Name,app-server3,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,0,1,1,2,3,58,,1,4,2,,0,,2,0,,0,L4OK,,11,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0,
Backend_Name,BACKEND,0,0,0,0,300,0,0,0,0,0,,0,0,0,0,UP,1,1,1,,0,282955,0,,1,4,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0,


You should see the backend enabled again.

NOTE:
If you happen to get "permission denied" errors when you try to send haproxy commands via the configured haproxy stats socket, this might be related to the fact that you have enabled the socket in read-only mode. If that is so, the haproxy socket cannot be written to and therefore you can only read info from it with status commands, but not send any write operations to haproxy via the unix socket.

One example haproxy configuration that enables the haproxy socket in read-only mode looks like this in haproxy.cfg:
 

 stats socket /var/lib/haproxy/stats


To make the haproxy socket read / write for the root superuser and other users belonging to the admin group 'adm', you should set in haproxy.cfg something like:

stats socket /var/lib/haproxy/stats-qa mode 0660 group adm level admin

or, if no special users with a set admin group need to have access to the socket, use instead a config like:

stats socket /var/lib/haproxy/stats-qa.sock mode 0600 level admin

Must have software on freshly installed windows – Essential Software after fresh Windows install

Friday, March 18th, 2016

Install-update-multiple-programs-applications-at-once-using-ninite

If you're in the IT industry, even if you don't like installing Windows frequently or you're a complete Linux / BSD user, you will certainly have a lot of friends who will want help from you to re-install or fix their Windows 7 / 8 / 10 OS. At least this is the case with me every year – I'm kind of obliged to install fresh Windows on newly bought notebooks / desktop PCs of friends or relatives.

Of course, depending on whom the new Windows OS is installed for, the preferences for necessary software vary, however more or less there is a sort of standard list of Windows software which is used daily by the average computer user, such as:
 

Not to forget, other good candidates to install on a fresh Windows installation are:

  • Winrar
  • PeaZIP
  • WinZip
  • GreenShot (to be able to easily screenshot stuff and save pictures locally and to the cloud)
  • AnyDesk (non free but very functional alternative to TeamViewer) to be able to access a remote PC
  • TightVNC
  • iTunes / Spotify (for people who also have an iPhone smartphone)
  • DropBox or pCloud (to have some extra cloud free space)
  • FBReader (for those reading a lot of books in different formats)
  • Rufus – Rufus is an efficient and lightweight tool to create bootable USB drives. It helps you create BIOS or UEFI bootable devices, as well as Windows To Go drives, and supports various disk formats and partition schemes.
  • Recuva is a data recovery software for Windows 10 (non free)
  • EaseUS (for specific backup / restore data purposes, but unfortunately non free)
  • For designers
  • Adobe Photoshop
  • Adobe Illustrator
  • f.lux – to control the brightness of the screen and potentially save your eyes
  • ImDisk virtual Disk Driver
  • KeePass / PasswordSafe – to Securely store your passwords
  • Putty / MobaXterm / SecureCRT / mPutty (for system administrators and programmers who have to deal with Linux / UNIX)

These are the programs I tend to install on new Windows installs, and thus I have more or less systematized the process.

I usually try to stick to free software where possible for each of the above categories, as a Free Software enthusiast, and luckily nowadays there is a lot of non-proprietary or at least free-as-in-beer software available out there.

For Windows sysadmins, or college and other public institution networks with multiple Windows computers that are not inside a domain, and also for people in computer repair shops where dozens of Windows pre-installs or a set of automatic software updates are necessary daily, make sure to take a look at Ninite.

ninite-automate-windows-program-deploy-and-update-on-new-windows-os-openoffice-screenshot

As official website introduces Ninite:

Ninite – Install and Update All Your Programs at Once

Of course, as Ninite is used by organizations such as NASA, Harvard Medical School etc., it is likely the tool might report your installed list of Windows software and various other Win PC statistical data to the Ninite developers and most likely the NSA, but this probably doesn't matter much from the moment you chose to have a Windows OS installed on your PC.

ninite-choises-to-build-an-install-package-with-useful-essential-windows-software-screenshot
 

For Windows System Administrators managing small and middle sized networks of PCs that are not inside a Domain Controller, Ninite could definitely save hours and in some cases even days of boring install and maintenance work. HP Enterprise or HP Inc. employees or ex-employees would definitely love Ninite, because what Ninite does is pretty much like the well known HP internal tool PC COE.

Ninite could also prepare an installer containing multiple applications based on the choice made on Ninite's website, so that's also a great thing, especially if you need to deploy different types of user PCs (Scientific / Gamers / Working etc.)

Perhaps there are also other useful things to install on a fresh Windows installation; if you're using something I'm missing, let me know in the comments.

Fix “There Has Been a Critical Error on Your Website” wordpress error

Friday, December 2nd, 2022

there-has-been-a-critical-error-on-your-website-wordpress-critical-error-fix

Say you have a shiny WordPress based website that has worked for years without any monitoring set, but suddenly you open the site and get the terrifying error:
 

There Has Been a Critical Error on Your Website

That is quite a stress for sure, as in the first few minutes you don't understand how this could have happened, since you did not touch the perfectly working site for a very, very long time.
Then you start to dig into the apache / nginx access.log, error.log and the mysql error log, frantically trying to figure it out. The usual ideas pop up immediately: do you have a recent backup of the website's database? If you have a pair of high availability webservers or a backup database that can serve the traffic via a separate standby instance of the service, you might try to switch off the main service and see whether the standby Webserver / SQL server instance serves the website fine.
However, if this is not an option and you have no standby backup service as a recovery Plan B already set up, your only option is to continue to debug what is wrong.
The next thing to check is whether there isn't a Web cache or Proxy in front of your webservers that is preventing you from seeing a recent version of the website and serving you some old cache, or an ISP proxy giving you unreal results. That is easily visible from the Webserver logs. If this is not the case either, the next thing is to:
 

Enable WordPress (wp-config.php) Debug mode

By default for Security reasons the WordPress PHP execution debug mode is switched off inside wp-config.php.
When there are odd pages on the WordPress based blog or site, however, this can easily be changed by modifying the WP_DEBUG true|false value.

To do so, edit wp-config.php with a text editor such as vim / nano / mcedit, or, if there is no SSH access to the remote machine, copy the file to your desktop via SFTP / FTP, inspect it and make sure WP_DEBUG / WP_DEBUG_DISPLAY / WP_DEBUG_LOG have the following values:

define( 'WP_DEBUG', true );

define( 'WP_DEBUG_DISPLAY', false );

define( 'WP_DEBUG_LOG', true );
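With WP_DEBUG_LOG enabled, WordPress by default writes PHP notices and errors to wp-content/debug.log, so it is worth tailing it while reloading the broken page – the path below assumes the docroot layout used in this example:

# tail -f /var/www/websitecom/wp-content/debug.log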

After reloading the browser tab showing "There has been a critical error on your website", you should get some errors or warnings like:
 

Warning: Illegal string offset 'parent_slug' in /var/www/websitecom/wp-content/plugins/photo-gallery/booster/main.php on line 180

Warning: Illegal string offset 'slug' in /var/www/websitecom/wp-content/plugins/photo-gallery/booster/main.php on line 180

 

Then you can temporarily disable the problematic plugin, in this case the photo-gallery one, recheck the website, and then restore the respective plugin files from a backup snapshot taken at a moment when the website was working.

If this doesn't solve it, more plugins are crashing, you can't find an easy way to work around it and you miss a backup, you might try to

 

Disable all WordPress active plugins

Disable your plugins from the dashboard, visit Plugins > Installed Plugins and tick the checkbox at the top of the list to select them all.
Then click Bulk Actions -> Deactivate, which should be enough to disable any conflicts and restore your site.

You can do essentially the same thing through SSH / FTP session.

Step 1: Log in to your site with SSH / FTP.
Step 2: Open the wp-content folder to find your plugins.
Step 3: Rename the plugins folder to plugins_old and verify that your site is working again. Via SSH run:

# cd path_to/wp-content && mv plugins plugins_old

or rename via FTP client
Step 4: Rename the folder back to “plugins”. The plugins should still be disabled, so you should be able to log in to your dashboard and activate them one by one. If
the plugins reactivate automatically, rename individual plugin folders with an _old suffix until your site is restored.
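If the WP-CLI tool happens to be installed on the host, the same can be done without renaming folders at all – a quick sketch, run from the WordPress docroot:

# list plugins and their current status
wp plugin list

# deactivate all plugins at once
wp plugin deactivate --all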

Raise the PHP Memory Limit

Sometimes a low PHP memory limit causes critical errors on WP based blogs and sites; if necessary, raise the memory limit via:

define( 'WP_MEMORY_LIMIT', '128M' );

Change Max Upload File Size and Text Processing function limits

To increase the max upload file size, add this code to wp-config.php:

ini_set('upload_max_filesize' , '256M' );

ini_set('post_max_size','256M');

And to fix the breaking of large pages on your site, add this code:

ini_set('pcre.recursion_limit',20000000);
ini_set('pcre.backtrack_limit',10000000);

Clear up any caches

If you use some caching of the website on the machine, such as memcached / ncache / redis / varnish, or an haproxy or any other proxy in front of the webserver for some kind of high availability, these could produce strange unexpected Critical errors on your website, thus restarting such services or cleaning up any cache would be advisable if you have such.
 

What Causes "There Has Been a Critical Error on Your Website" error?


The reason could be practically anything, as WP is a multi-component and somewhat bloated piece of free software. The general causes range from a missing database table / table field, to a messed up plugin after an update, to a disappeared critical plugin or essential WordPress PHP file. In my specific case the reason was simply the Plugins Auto-update, which I had the stupidity to enable.

The WordPress Automatic Updates, though saving you effort and in most cases protecting your website against recent bugs and exploits and increasing the WP security level, often cause issues, and from my personal experience they are not recommended, so better avoid them. Again, next time you implement any automation on your server, make sure you put some kind of monitoring in place.

Even if you decide to enable it, make sure you do it the right way and not like me: enable some monitoring of the WordPress site via Zabbix / Nagios / Cacti / monit etc., to be sure you get notified immediately if the WordPress based site is down.

Zabbix rkhunter monitoring check if rootkits trojans and viruses or suspicious OS activities are detected

Wednesday, December 8th, 2021

monitor-rkhunter-with-zabbix-zabbix-rkhunter-check-logo

If you're using rkhunter to monitor for malicious activities, binary changes, rootkits, viruses, malware, suspicious stuff and other possible or actual security breach issues, perhaps you have configured your machines to report to some Email.
But what if you want to have a scheduled rkhunter run on the machine and you don't want to count too much on email alerting (especially because email alerting makes it possible for alerts to be noticed by the sysadmin pretty late)?

We have been in that situation, and in this case me and my dear colleague Georgi Stoyanov developed a small rkhunter Zabbix userparameter check to track and alert if any traces of "Warning" are matched in the traditional rkhunter log file /var/log/rkhunter/rkhunter.log.

Setting it up and using it is pretty easy; you will need to have a recent version of zabbix-agent installed on the machine and connected to a Zabbix server, in my case this is:

[root@centos ~]# rpm -qa |grep -i zabbix-agent
zabbix-agent-4.0.7-1.el7.x86_64

1. Create the userparameter file placed inside /etc/zabbix/zabbix_agentd.d/userparameter_rkhunter_warning_check.conf

[root@centos /etc/zabbix/zabbix_agentd.d ]# cat userparameter_rkhunter_warning_check.conf
# userparameter script to check if any Warning is inside /var/log/rkhunter/rkhunter.log and if found to trigger Zabbix alert
UserParameter=rkhunter.warning, (TODAY=$(date |awk '{ print $1" "$2" "$3 }'); if [ $(cat /var/log/rkhunter/rkhunter.log | awk "/$TODAY/,EOF" | /bin/grep -i '\[ Warning \]' | /usr/bin/wc -l) != '0' ]; then echo 1; else echo 0; fi)
UserParameter=rkhunter.suspected,(/bin/grep -i 'Suspect files: ' /var/log/rkhunter/rkhunter.log|tail -n 1| awk '{ print $4 }')
UserParameter=rkhunter.rootkits,(/bin/grep -i 'Possible rootkits: ' /var/log/rkhunter/rkhunter.log|tail -n 1| awk '{ print $4 }')


2. Prepare Rkhunter Template, Triggers and Items


In the Zabbix Server that you access from the web control interface, you will have to prepare a new template, let's call it Rkhunter, with the necessary Triggers and Items.


2.1 Create Rkhunter Items
 

On the Zabbix Server side, you will have to configure 3 Items for the 3 userparameter script keys configured above, like so:

rkhunter-items-zabbix-screenshot.png

  • rkhunter.suspected Item configuration


rkhunter-suspected-files

  • rkhunter.warning Zabbix Item config

rkhunter-warning-found-check-zabbix

  • rkhunter.rootkits Zabbix Item config

rkhunter-zabbix-possible-rootkits-item1

2.2 Create Triggers

You need to have an overall of 3 triggers like in below shot:

zabbix-rkhunter-all-triggers-screenshot
 

  • rkhunter.rootkits Trigger config

rkhunter-rootkits-trigger-zabbix1

  • rkhunter.suspected Trigger cfg

rkhunter-suspected-trigger-zabbix1

  • rkhunter warning Trigger cfg

rkhunter-warning-trigger1

3. Reload zabbix-agent and test the keys


It is necessary to reload the zabbix agent for the new userparameters to start being sent to the remote zabbix server (through a proxy if you have one configured).

[root@centos ~]# systemctl restart zabbix-agent


To push the keys to the server manually you can also use zabbix_sender; to have this test tool available you will need the zabbix-sender package installed on the server.

To trigger a manual test, if you happen to have some problems with the key (which shouldn't be the case), you can send a value to the respective key with the below command:

[root@centos ~ ]# zabbix_sender -vv -c "/etc/zabbix/zabbix_agentd.conf" -k "rkhunter.warning" -o "1"


Check on the Zabbix Server that the sent value is received; for any oddities, as usual, check what is inside /var/log/zabbix/zabbix_agentd.log for errors or warnings.

ASCII Art studio – A powerful ASCII art editor for Windows / Playscii a cool looking text editor for Linux

Monday, June 28th, 2021

This post is just informative for text geeks who are in love with ASCII Art; it is a bit of a rant as I will say nothing new, but I thought it might be of interest to some console maniac out there 🙂

ascii art studio aas program windows xp professional drawing program screenshot

While checking stuff on the Internet I've stumbled on an interesting piece of ASCII art freak software – ASCII Art Studio. ASCII Art Studio unfortunately needs licensing and is not Free Software. But anyway, for anyone willing to draw pro ASCII art pictures it is a must see. Check it out;

Isn't it like a plain-text pro Photoshop? 🙂 It's a pity we don't have a Linux / BSD release of this wonderful piece of software. I've tried with WINE (Wine Is Not an Emulator) on Linux to make ASCII Art Studio work, but that was a fail. It seems the only way to make it work is to have Windows, or, as a worst case, to install a Virtual Machine with VirtualBox / VMware and run it inside, if you don't have a Windows PC at hand.

Of course there is stuff on Linux for ASCII art editing you can use if you want native software to edit ASCIIs, such as Playscii. Unfortunately Playscii is not an easy one to install: the software doesn't have a prepared rpm or deb binary you can easily roll onto the OS, and you have to manually build all the required python modules and have a working version of python3 to be able to make it work.

I did not have much time to test the install, and since I faced issues with the Playscii install I just abandoned it. If some geek has more time, I guess it is worth giving it a try; below are 2 screenshots from the PLAYSCII official download page.

playscii_shot1-official.

As you can see, the authors of the open source Playscii, whose source is available via GitHub, chose to have amazing looking ASCII art text menus, though for daily ASCII art editing it is perhaps much more complicated to use than the simplistic ASCII Art Studio.

playscii_shot2-official

There is other stuff for Linux to do ASCII Art files text edit like:
JavE (this one I don't personally like because it is Java based), Ascii Art Maker or PabloDraw Linux (unfortunately these 2 are proprietary).

Postfix copy every email to a central mailbox (send a copy of every mail sent via mail server to a given email)

Wednesday, October 28th, 2020

Postfix-logo-always-bcc-email-option-send-all-emails-to-a-single-address-with-postfix.svg

Say you need to do a mail server migration, where you have a local configured Postfix on a number of Linux hosts named:

Linux-host1
Linux-host2
Linux-host3

etc.


all configured to send email via old Email send host (MailServerHostOld.com) in each linux box's postfix configuration's /etc/postfix/main.cf.
Now due to some infrastructure change in the topology of the network or anything else, you need to relay mails sent via another, presumably properly configured, Linux relay host (MailServerNewHost.com).

Such a migration always carries the risk that some of the emails originating from local scripts running on Linux-host1, Linux-host2 … or from some application or anything else set to send via them might not be properly delivered to some external Internet based mailboxes via the new relayhost MailServerNewHost.com.

E.g. in /etc/postfix/main.cf on the Linux-Host* machines, you have the below config after the migration:

relayhost = [MailServerNewHost.com]

Let's say that you want to make sure that you don't end up with lost emails, as you can't be sure whether the new email server will deliver correctly to the old recipient addresses. What to do then?

To make sure mail will not end up in an undelivered state and get lost forever after a week or so (depending on the mail queue retention period configured on the sending Linux MTAs and the mail relay MailServerNewHost.com), it is a very good approach to temporarily BCC (Blind Carbon Copy) every email sent via MailServerNewHost.com from the locally configured Postfix-es on Linux-Host*.

In postfix it is very easy to achieve that: all you have to do is set on your MailServerNewHost.com the postfix config variable always_bcc, smartly included by the postfix Mail Transfer Agent developers for cases exactly like this.

To forward a copy of all emails passed via the mail server, just place at the end of /etc/postfix/main.cf, after logging in via ssh on MailServerNewHost.com:

always_bcc = [email protected]


Now all that is left is to reload postfix to force the new configuration to get loaded; on systemd based hosts, as is usual today, do:

# systemctl reload postfix


Finally, to make sure all works as expected and mail is sent, do a test via the local MTAs.
 

Linux-Host:~# echo -e "Testing body" | mail -s "testing subject" -r "[email protected]" [email protected]

Linux-Host:~# echo -e "Testing body" | mail -s "testing subject" -r "[email protected]" [email protected]


As you can see, I'm using -r to simulate a sender address; this is a feature of mailx and is not available on older Linux OS hosts that are bundled with the plain mail command only.
Now go and open the [email protected] mailbox in Outlook (if it is an M$ Office 365 MX shared mailbox), Thunderbird or whatever email fetching software that supports POP3 or IMAP (in case you configured the catch-all mailbox on some other Postfix / Sendmail / Qmail MTA), and check whether you started receiving a lot of emails 🙂
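To confirm the copies actually pass through the relay, the active configuration, the mail queue and the mail log on MailServerNewHost.com can also be checked – the log file path below is assumed to be the distribution default:

# verify the always_bcc setting is active
postconf always_bcc

# check for stuck mails in the queue
postqueue -p

# watch deliveries live
tail -f /var/log/maillog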

That's all folks enjoy ! 🙂

How to check Microsoft IIS webserver version

Monday, July 21st, 2014

If you have to tune some weirdly behaving Microsoft IIS (Internet Information Services) webserver, the first thing to do is to collect information about the system you're dealing with – get the version of the installed Windows and check what IIS version is running on the Windows server.

To get the version of the installed Windows on the system you just logged in to, the quickest way I use is:
 

Start -> My Computer (right mouse button) Properties

check-windows-server-version-screenshot-windows-2003-r2

Run regedit from cmd.exe and check the value of the registry key:

 

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\InetStp\VersionString


check-iis-webserver-version-with-windows-registry-screenshot

As you can see in the screenshot, in this particular case it is IIS version 6.0.
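If you prefer to stay on the command line, the same registry value can be read with the built-in reg tool – a quick sketch, assuming the VersionString value is present on that IIS version:

reg query "HKLM\SOFTWARE\Microsoft\InetStp" /v VersionString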

An alternative way to check the IIS version in some cases (if the IIS version banner is not disabled) is to telnet to the webserver:

telnet your-webserver 80
 


Once connected, send the following (followed by pressing Enter twice):

HEAD / HTTP/1.0


Also, on some Windows versions it is possible to check the IIS webserver version from the Internet Information Services Management Console:

To check IIS version from IIS Manager:

Start (button) -> Control Panel -> Administrative Tools -> "Internet Information Services" IIS Manager

From IIS Manager go to:

Help -> About Microsoft Management Console


Here is a list with the most common IIS version you will get depending on the version of Windows server:

 

Windows version            IIS version
Windows NT 3.51            1.0
Windows NT 4               2.0-4.0
Windows Server 2000        5.0
Windows XP Professional    5.1
Windows Server 2003        6.0
Windows Vista              7.0
Windows Server 2008        7.0
Windows Server 2008 R2     7.5
Windows 7                  7.5
Windows Server 2012        8.0
Windows 8                  8.0
Windows Server 2012 R2     8.5
Windows 8.1                8.5

If you only have upload FTP access to a folder served by the IIS webserver – i.e. no access to the Windows server running IIS – you can also grab the IIS version with the following .ASP code:
 

<%
response.write(Request.ServerVariables("SERVER_SOFTWARE"))
%>


Save the file as anyfile.asp somewhere in the IIS docroot and invoke it in a browser.

Report haproxy node switch script useful for Zabbix or other monitoring

Tuesday, June 9th, 2020

zabbix-monitoring-logo
For those who administer a corosync clustered haproxy and need monitoring for the case when the main configured haproxy node in the cluster changes, I've developed a small script to be integrated with the zabbix-agent installed on the hosts, reporting to a central zabbix server via a zabbix proxy.
The script is very simple: it assumes the DC1 variable is the default haproxy node and DC2 and DC3 are the 2 backup nodes. The script uses crm_mon, which is not installed by default on every server, so if you'll be using it you'll have to install it first, but the script can easily be adapted to use the pcs command instead.

Below is the bash shell script:

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }'); do ((f++)); DC[$f]="$i"; done; \
DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); \
if [ "$DC" == "${DC[1]}" ]; then echo "1 Default DC Switched to ${DC[1]}"; elif [ "$DC" == "${DC[2]}" ]; then \
echo "2 Default DC Switched to ${DC[2]}"; elif [ "$DC" == "${DC[3]}" ]; then echo "3 Default DC: ${DC[3]}"; fi


To configure it with zabbix monitoring, it can be set up as a UserParameter script.

The way I configured it in Zabbix is as follows:


1. Create the userparameter_active_node.conf

The below script is for a 3-node haproxy cluster:

# cat > /etc/zabbix/zabbix_agentd.d/userparameter_active_node.conf

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }'); do ((f++)); DC[$f]="$i"; done; \
DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); \
if [ "$DC" == "${DC[1]}" ]; then echo "1 Default DC Switched to ${DC[1]}"; elif [ "$DC" == "${DC[2]}" ]; then \
echo "2 Default DC Switched to ${DC[2]}"; elif [ "$DC" == "${DC[3]}" ]; then echo "3 Default DC: ${DC[3]}"; fi

Once pasted, press CTRL + D to save the file.


A slightly improved version of the script for 2 nodes is like so:
 

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }' | sed -e 's#:##g'); do DC_ARRAY[$f]="$i"; ((f++)); done; GET_CURR_DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); if [ "$GET_CURR_DC" == "${DC_ARRAY[0]}" ]; then echo "1 Default DC ${DC_ARRAY[0]}"; fi; if [ "$GET_CURR_DC" == "${DC_ARRAY[1]}" ]; then echo "2 Default Current DC Switched to ${DC_ARRAY[1]} Please check"; fi; if [ -z "$GET_CURR_DC" ] || [ -z "${DC_ARRAY[1]}" ]; then printf "Error something might be wrong with HAProxy Cluster on $HOSTNAME "; fi;


The haproxy_active_DC_zabbix.sh script, with a few more comments and explanations, is available here.
2. Configure access for /usr/sbin/crm_mon for zabbix user in sudoers

 

# vim /etc/sudoers

zabbix          ALL=NOPASSWD: /usr/sbin/crm_mon
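Before creating the Item, it is worth verifying that the zabbix user can actually run crm_mon via sudo and that the key resolves – a small sketch, run as root and assuming the agent config includes the zabbix_agentd.d directory as usual:

# check the sudo rule works for the zabbix user without a password prompt
sudo -u zabbix sudo /usr/sbin/crm_mon -n -1

# test the new key locally through the agent
zabbix_agentd -t active.dc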


3. Configure the active.dc key Trigger and Item in Zabbix

active-node-switch1