
Put a Timestamp on Bash History

Sometimes it is very helpful to have a timestamp on bash history; that way it’s easier to know the exact time a command was executed.

To put a timestamp on history, run the following command:

HISTTIMEFORMAT="%d/%m/%y %T "

That’s all. The next time you run the history command, each entry will display with a timestamp.
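Note that set this way, the variable only lasts for the current shell session. To persist it across logins, something like the following should work (assuming bash reads your ~/.bashrc):

echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bashrc
source ~/.bashrc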

Hope someone finds it useful.

Kerberizing a RHEL Server

Notes from Plone…

# install the Kerberos client packages
yum install krb5-workstation pam_krb5 -y
# if krb5.conf is present, back it up and pull a fresh copy from the package
mv /etc/krb5.conf /etc/krb5.conf.bak
yum reinstall krb5-libs -y
# point the stock config at the uconn.edu realm
sed -i -e 's/example.com/uconn.edu/g' /etc/krb5.conf
sed -i -e 's/EXAMPLE.COM/UCONN.EDU/g' /etc/krb5.conf
fqdn=`hostname --fqdn`
# print the kadmin commands to run for this host (ank is shorthand for add_principal)
echo "
ank -randkey host/$fqdn@UCONN.EDU
ktadd -k /etc/krb5.keytab host/$fqdn@UCONN.EDU
"

--- OR ---

# authenticate to kadmin as your admin principal
kadmin -p netid/admin@UCONN.EDU
# inside kadmin: create the host principal and write its key to the keytab
addprinc -randkey host/$fqdn
ktadd -k /etc/krb5.keytab host/$fqdn
modprinc -requires_preauth host/$fqdn
exit
# enable Kerberos in the system auth configuration
authconfig --enablekrb5 --updateall
# allow the admin principal to log in as this account
echo "netid/admin@UCONN.EDU" >> ~/.k5login
restorecon ~/.k5login
chmod 600 ~/.k5login
service sshd restart
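To sanity-check the result, kinit and klist are handy (substitute your own principal; this is just a quick verification sketch):

klist -k /etc/krb5.keytab        # list the keys now stored in the keytab
kinit netid/admin@UCONN.EDU      # confirm you can obtain a ticket from the KDC
klist                            # show the resulting ticket cache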

systemd commands, hints and cheatsheet

List all running services

Running systemctl with no arguments lists all loaded units; to show just the services:

# systemctl list-units --type=service

Start/stop or enable/disable services

Activates a service immediately:

# systemctl start foo.service

Deactivates a service immediately:

# systemctl stop foo.service

Restarts a service:

# systemctl restart foo.service

Shows status of a service including whether it is running or not:

# systemctl status foo.service

Enables a service to be started on bootup:

# systemctl enable foo.service

Disables a service to not start during bootup:

# systemctl disable foo.service

Check whether a service is already enabled or not:

# systemctl is-enabled foo.service; echo $?

0 indicates that the service is enabled; 1 indicates that it is disabled.
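To check several services in one pass, a small loop works (sshd.service and crond.service here are just example names):

for svc in sshd.service crond.service; do
    printf '%s: ' "$svc"
    systemctl is-enabled "$svc"
done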

How do I change the runlevel?

systemd has the concept of targets which is a more flexible replacement for runlevels in sysvinit.

Run level 3 is emulated by multi-user.target. Run level 5 is emulated by graphical.target. runlevel3.target is a symbolic link to multi-user.target and runlevel5.target is a symbolic link to graphical.target.

You can switch to ‘runlevel 3’ by running

# systemctl isolate multi-user.target (or) systemctl isolate runlevel3.target

You can switch to ‘runlevel 5’ by running

# systemctl isolate graphical.target (or) systemctl isolate runlevel5.target

How do I change the default runlevel?

systemd uses symlinks to point to the default runlevel. You have to delete the existing symlink first before creating a new one:

# rm /etc/systemd/system/default.target

Switch to runlevel 3 by default

# ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target

Switch to runlevel 5 by default

# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

systemd does not use /etc/inittab file.
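On newer systemd releases you can also query and set the default target directly, without touching the symlink yourself (assuming your systemd version includes these verbs):

# systemctl get-default
# systemctl set-default multi-user.target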

List the current run level

The runlevel command still works with systemd, and you can continue using it. However, runlevels are a legacy concept in systemd, emulated via ‘targets’, and multiple targets can be active at the same time. So the equivalent in systemd terms is

# systemctl list-units --type=target

Powering off the machine

You can use

# poweroff

Some more possibilities are: halt -p, init 0, shutdown -P now

Note that halt used to work the same as poweroff in previous Fedora releases, but systemd distinguishes between the two: halt without parameters now does exactly what it says, it merely stops the system without turning it off.

Service vs. systemd

# service NetworkManager stop

(or)

# systemctl stop NetworkManager.service

Chkconfig vs. systemd

# chkconfig NetworkManager off

(or)

# systemctl disable NetworkManager.service

Readahead

systemd has a built-in readahead implementation that is not enabled on upgrades. It should improve bootup speed, but your mileage may vary depending on your hardware. To enable readahead:

# systemctl enable systemd-readahead-collect.service
# systemctl enable systemd-readahead-replay.service

systemd cheatsheet

SysVinit command | systemd equivalent | Notes
service foobar start | systemctl start foobar.service | Used to start a service (not persistent across reboots)
service foobar stop | systemctl stop foobar.service | Used to stop a service (not persistent across reboots)
service foobar restart | systemctl restart foobar.service | Used to stop and then start a service
service foobar reload | systemctl reload foobar.service | When supported, reloads the config file without interrupting pending operations
service foobar condrestart | systemctl condrestart foobar.service | Restarts the service only if it is already running
service foobar status | systemctl status foobar.service | Tells whether a service is currently running
ls /etc/rc.d/init.d/ | ls /lib/systemd/system/*.service /etc/systemd/system/*.service | Used to list the services that can be started or stopped
chkconfig foobar on | systemctl enable foobar.service | Turns the service on for the next boot, or other trigger
chkconfig foobar off | systemctl disable foobar.service | Turns the service off for the next boot, or other trigger
chkconfig foobar | systemctl is-enabled foobar.service | Used to check whether a service is configured to start in the current environment
chkconfig foobar --list | ls /etc/systemd/system/*.wants/foobar.service | Used to list which runlevels this service is configured on or off for
chkconfig foobar --add | (not needed, no equivalent) |

References

fedoraproject.org/wiki/Systemd
Linux readahead: less tricks for more
fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet

RHEL 6, RHEL 7 Comparison

Moving from Red Hat 6 to Red Hat 7, there are a *lot* of differences to get used to. It is like having a friend come over and rearrange your entire house, including all the closets and cupboards!! You know it is your house, you just can’t seem to find any of your stuff!

Default File System
  RHEL 7: XFS
  RHEL 6: EXT4

Kernel Version
  RHEL 7: 3.10.x kernel
  RHEL 6: 2.6.x kernel

Kernel Code Name
  RHEL 7: Maipo
  RHEL 6: Santiago

General Availability Date of First Major Release
  RHEL 7: 2014-06-09 (kernel version 3.10.0-123)
  RHEL 6: 2010-11-09 (kernel version 2.6.32-71)

First Process
  RHEL 7: systemd (process ID 1)
  RHEL 6: init (process ID 1)

Runlevels
  RHEL 7: runlevels are called “targets”:
    runlevel0.target -> poweroff.target
    runlevel1.target -> rescue.target
    runlevel2.target -> multi-user.target
    runlevel3.target -> multi-user.target
    runlevel4.target -> multi-user.target
    runlevel5.target -> graphical.target
    runlevel6.target -> reboot.target
    The default is /etc/systemd/system/default.target (by default linked to the multi-user target).
  RHEL 6: traditional runlevels 0 through 6, with the default runlevel defined in /etc/inittab.

Host Name
  RHEL 7: with the move to systemd, the hostname is defined in /etc/hostname.
  RHEL 6: the hostname variable was defined in the /etc/sysconfig/network configuration file.

Change In UID Allocation
  RHEL 7: by default, new users get UIDs assigned starting from 1000. This can be changed in /etc/login.defs if required.
  RHEL 6: default UIDs assigned to users start from 500. This can also be changed in /etc/login.defs.

Max Supported File Size
  RHEL 7: maximum (individual) file size = 500TB; maximum filesystem size = 500TB. (Only on 64-bit machines; Red Hat Enterprise Linux does not support XFS on 32-bit machines.)
  RHEL 6: maximum (individual) file size = 16TB; maximum filesystem size = 16TB. (On a 64-bit machine; on a 32-bit machine the maximum file size is 8TB.)

File System Check
  RHEL 7: xfs_repair. XFS does not run a file system check at boot time.
  RHEL 6: e2fsck. The file system check gets executed at boot time.

Differences Between xfs_repair & e2fsck
  xfs_repair: inode and inode blockmap (addressing) checks; inode allocation map checks; inode size checks; directory checks; pathname checks; link count checks; freemap checks; super block checks.
  e2fsck: inode, block, and size checks; directory structure checks; directory connectivity checks; reference count checks; group summary info checks.

Difference Between xfs_growfs & resize2fs
  RHEL 7: xfs_growfs takes the mount point as its argument.
  RHEL 6: resize2fs takes the logical volume name as its argument.

Change In File System Structure
  RHEL 7: /bin, /sbin, /lib, and /lib64 are now nested under /usr.
  RHEL 6: /bin, /sbin, /lib, and /lib64 are usually directly under /.

Boot Loader
  RHEL 7: GRUB 2. Supports GPT and additional firmware types, including BIOS, EFI and OpenFirmware, and can boot from various file systems (xfs, ext4, ntfs, hfs+, raid, etc).
  RHEL 6: GRUB 0.97.

KDUMP
  RHEL 7: supports kdump on large-memory systems, up to 3TB.
  RHEL 6: kdump doesn’t work properly on systems with large amounts of RAM.

System & Service Manager
  RHEL 7: systemd, which is compatible with the SysV and Linux Standard Base init scripts it replaces.
  RHEL 6: Upstart.

Enable/Start Service
  RHEL 7: the systemctl command replaces service and chkconfig.
    Start a service: systemctl start nfs-server.service
    Enable a service to start automatically on boot: systemctl enable nfs-server.service
    Although one can still use the service and chkconfig commands to start/stop and enable/disable services, they are not 100% compatible with the RHEL 7 systemctl command (according to Red Hat).
  RHEL 6: the service and chkconfig commands.
    Start a service: service nfs start (or /etc/init.d/nfs start)
    Enable a service at specific runlevels: chkconfig --level 35 nfs on

Default Firewall
  RHEL 7: firewalld (dynamic firewall). The built-in configuration is located under the /usr/lib/firewalld directory; the configuration you can customize is under /etc/firewalld. It is not possible to use firewalld and iptables at the same time, but it is still possible to disable firewalld and use iptables as before.
  RHEL 6: iptables.

Network Bonding
  RHEL 7: team driver, configured in /etc/sysconfig/network-scripts/ifcfg-team0 with DEVICE="team0" and DEVICETYPE="Team".
  RHEL 6: bonding, configured in /etc/sysconfig/network-scripts/ifcfg-bond0 with DEVICE="bond0".

Network Time Synchronization
  RHEL 7: the chrony suite (faster time sync compared with ntpd).
  RHEL 6: ntpd.

NFS
  RHEL 7: NFSv4.1. NFSv2 is no longer supported; Red Hat Enterprise Linux 7 supports NFSv3, NFSv4.0, and NFSv4.1 clients.
  RHEL 6: NFSv4.

Cluster Resource Manager
  RHEL 7: Pacemaker
  RHEL 6: rgmanager

Load Balancer Technology
  RHEL 7: Keepalived and HAProxy
  RHEL 6: Piranha

Desktop/GUI Interface
  RHEL 7: GNOME 3 and KDE 4.10
  RHEL 6: GNOME 2

Default Database
  RHEL 7: MariaDB (the default implementation of MySQL)
  RHEL 6: MySQL

Managing Temporary Files
  RHEL 7: systemd-tmpfiles (a more structured and configurable method to manage temporary files and directories).
  RHEL 6: tmpwatch
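Since firewalld is one of the bigger day-to-day changes, here is a quick sketch of the basic firewall-cmd workflow on RHEL 7 (the https service is just an example):

firewall-cmd --state                          # is firewalld running?
firewall-cmd --permanent --add-service=https  # allow a service persistently
firewall-cmd --reload                         # load the permanent configuration
firewall-cmd --list-all                       # show the active zone configuration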

Disk Woes

I hope to never use this document again, but thought it worth documenting in case someone else has need of the information. I powered my desktop off for a planned power outage. When I powered it back on, the system failed to boot, reporting either “Error 17” or “Error 25”; in short, the software RAID (mirrored disks) was corrupted… The timing of this event could not have been better: the power outage included our data center, so I had to power over 100 systems on without my desktop! Thank God for Live CDs!! Following the power-on there were other issues to deal with, so it was almost a week before I could deal with my failed desktop. Here is what I tried:

SATA-to-USB cable: since the drive was part of a RAID pair this didn’t work, and I didn’t waste a lot of time on it. What it did help me discover was which disk was bad.

Knowing which disk was bad, I then confirmed the failed drive using the BIOS and boot sequence on my desktop. I confirmed it was /dev/sda that had failed. I was able to get a replacement disk of the same size from our desktop support team. With the new disk installed, here is what I did and the results.

Boot the system to an Ubuntu Live CD

I don’t have time to add much description now, but the commands and sequence (with brief comments) should hopefully help for now. Feel free to post a question in the comments if you have any.

# query the array state, then try to assemble whatever can be assembled
sudo mdadm --query --detail /dev/md/1
sudo mdadm --assemble --scan
sudo mdadm --query --detail /dev/md/1
sudo mdadm --assemble --scan

# check the state of each array
sudo mdadm --query --detail /dev/md/1
sudo mdadm --query --detail /dev/md/0
sudo mdadm --query --detail /dev/md/2
sudo mdadm --query --detail /dev/md/3

# stop the arrays so they can be reassembled cleanly
sudo mdadm --stop /dev/md/0
sudo mdadm --stop /dev/md/1
sudo mdadm --stop /dev/md/2
sudo mdadm --stop /dev/md/3

# verify each array is stopped (stopping again where needed)
sudo mdadm --query --detail /dev/md/0
sudo mdadm --query --detail /dev/md/1
sudo mdadm --query --detail /dev/md/2
sudo mdadm --stop /dev/md/2
sudo mdadm --query --detail /dev/md/3
sudo mdadm --stop /dev/md/3

# confirm which disks and partitions the system sees
sudo fdisk -l

# reassemble the arrays and check their status
cat /proc/mdstat
sudo mdadm --assemble --scan

cat /proc/mdstat
# test-mount to confirm the data on the surviving disk is intact
sudo mount /dev/md3 /mnt
cat /proc/mdstat
sudo mount /dev/sdb1 /mnt

sudo fdisk -l

sudo mdadm --stop /dev/md/0n3

cat /proc/mdstat
# mark the old disk's partitions as failed in each array
sudo mdadm --manage /dev/md0 --fail /dev/sda1
sudo mdadm --manage /dev/md0 --fail /dev/sda
sudo mdadm --manage /dev/md1 --fail /dev/sda2
sudo mdadm --manage /dev/md2 --fail /dev/sda3
cat /proc/mdstat
# dump the new disk's partition table, then clone the layout from the good disk (sdb) onto it
sudo sfdisk -d /dev/sda > sda.out
sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sda
sudo sfdisk -d /dev/sda > sda.out

sudo fdisk -l
# add the new partitions back into each array, then watch the rebuild
sudo mdadm --manage /dev/md0 --add /dev/sda1
sudo mdadm --manage /dev/md1 --add /dev/sda2
sudo mdadm --manage /dev/md2 --add /dev/sda3
sudo mdadm --manage /dev/md3 --add /dev/sda5
cat /proc/mdstat
watch cat /proc/mdstat

Every 2.0s: cat /proc/mdstat                                  Mon Aug 17 13:15:31 2015

Personalities : [raid1]
md0 : active raid1 sda1[2] sdb1[1]
      4093888 blocks super 1.1 [2/2] [UU]

md1 : active raid1 sda2[2] sdb2[1]
      819136 blocks super 1.0 [2/2] [UU]

md3 : active raid1 sda5[2] sdb5[1]
      278538048 blocks super 1.1 [2/1] [_U]
      [==============>......]  recovery = 70.4% (196127360/278538048) finish=15.0min speed=91334K/sec
      bitmap: 0/3 pages [0KB], 65536KB chunk

md2 : active raid1 sda3[2] sdb3[1]
      204668800 blocks super 1.1 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>
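One caution worth adding: if the replaced disk is part of the boot mirror, it has no boot loader on it yet. Once the resync completes, reinstalling GRUB on the new disk is likely needed; the exact command depends on whether the system runs GRUB legacy or GRUB 2, but run from the repaired system (or a chroot) the usual form is something like:

sudo grub-install /dev/sda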

Good Luck

Unresponsive VMware Images

Over the past week I have had two VMware images become unresponsive. When trying to access the images via the VMware console, any action reports:

rejecting I/O to offline device

A reboot fixes the problem; however, for a Linux guy that isn’t exactly acceptable. Upon digging a little deeper, it appears the problem is disk latency, or more specifically a loss or timeout of disk communication with the SAN. I looked at the problem with the VMware admin and we did see a latency issue, which we reported to the storage team. That, however, does not fix my problem. What to do… The real problem is that systems do not like a temporary loss of I/O communication with their disks. This tends to result in a kernel panic or, in this case, never-ending I/O errors.

Since this is really a problem of latency (or traffic) there are a couple of things that can be done on the Linux system to reduce the chances of this happening while the underlying problem is addressed.

There are two settings you can address. The first is swappiness (how aggressively the kernel frees memory by writing runtime memory to disk, aka swap). The default setting is 60 out of 100, which generates a lot of I/O. Setting swappiness to 10 works well:

vi /etc/sysctl.conf
vm.swappiness = 10
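To apply the new value immediately, without waiting for a reboot, sysctl can set it on the fly:

sudo sysctl -w vm.swappiness=10
# or re-read everything from /etc/sysctl.conf
sudo sysctl -p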

Unfortunately for me, my systems already had this setting (I verified it), so that isn’t my culprit.

The only other setting I could think of tweaking was the disk timeout threshold. If you check your system’s timeout, it is probably set to the default of 30:

cat /sys/block/sda/device/timeout
30

Increasing this value to 180 will hopefully be sufficient to help me avoid problems in the future.  You do that by adding an entry to /etc/rc.local:

vi /etc/rc.local
echo 180 > /sys/block/sda/device/timeout

I’ll see how things go and report back if I experience any more problems with I/O.

 UPDATE (24 Sep 2015):

The above setting, while good to have, did not resolve the issue. Fortunately I was logged into a system when it began having the I/O errors, and I was still able to perform some admin functions. Poking around the system and digging through the system logs and dmesg output at the same time led me to a VMware knowledge base article about Linux 2.6 systems and disk timeouts.

I passed this on to our VMware team. They dug deeper and determined that installing VMware Tools would accomplish the same thing. I installed VMware Tools on the server and the problem went away! It seems VMware Tools hides certain disk events that Linux servers are susceptible to. There you go, hope that helps.

6GB free = 100% disk usage?!

What to do when you have plenty of available disk space but the system is telling you the disk is full?!  I was working on a server migration, moving 94GB of user files from the old server to the new server.  Since we aren’t planning on seeing a lot of growth on the new server, I provisioned a 100GB partition for the user files.  A perfect plan, right?…  So I thought.  After rsync’ing the user files, the new server was showing 100% disk usage:

Filesystem*            Size  Used Avail Use% Mounted on*
/dev/mapper/my_lv_name
                       99G   94G  105M 100% /user_dir

Given competing tasks, at first glance I only saw the 100%. Naturally I assumed something had gone wrong with my rsync, or that I had forgotten to clear the target partition. So I deleted everything from the target partition and rsync’d again. When the result was the same, it gave my brain pause to say… what?!

My first thought was that the block size was different on the two servers: the old server’s block size was 4kB; perhaps the new server had a larger block size. As we joked, too much air in the files! It turns out, using the following commands, that the block size was the same on both systems:

usage:
blockdev --getbsz partition
# blockdev --getbsz /dev/mapper/my_lv_name 
4096

So the block size of the file system on both servers is 4kB.

I started digging through the man pages of tune2fs and dumpe2fs (and Google) to see if I could figure out what was consuming the disk space. Perhaps there was a defunct process that was holding the blocks (like from a deletion); there wasn’t. In my research I found the root cause. New ext2/3/4 partitions set aside a 5% reserve for file system performance and to ensure available space for “important” root processes. Not a bad idea for the root and var partitions, but this approach doesn’t make sense in most other use cases, in this case user data.

Using the tune2fs command, we can see the “Reserved block count” like this:

tune2fs -l /dev/mapper/vg_name-LogVol00

The specific lines we are interested in are:

Block count:              52165632
Reserved block count:     2608282
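As a quick sanity check, the reserved space is simply the reserved block count times the block size:

2608282 blocks x 4096 bytes/block = 10683523072 bytes, roughly 10.7GB held in reserve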

These lines show that there is a 5% reserve on the disk/logical volume. We fix this with the following command:

tune2fs -m 1 /dev/mapper/vg_name-LogVol00

This reduces the reserve to 1%. The resulting reserved block count reflects this 1%:

Block count:              52165632
Reserved block count:     521107
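If a volume holds nothing but user data, you can arguably drop the reserve entirely (keeping a reserve still makes sense on root and /var):

tune2fs -m 0 /dev/mapper/vg_name-LogVol00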

While this situation is fairly unique, hopefully this will at the least answer your questions and help you better understand the systems you manage.

*The names in the above have been changed to protect the innocent.

Cleaning Up Memory Usage

I noticed my Ubuntu desktop was using a rather large portion of available memory. I usually have a lot running on my system (multiple terminals, background jobs, etc.), so this is nothing unusual. Today, however, I noticed my system was sluggish, so I started digging. Memory use was near 100%. I closed all of my programs to see what effect that would have, but the memory usage stayed very high, around 90%. I started to suspect a memory leak in one of the processes or programs I was running. I really didn’t want to reboot the system, since it isn’t a Windows desktop! What to do? I needed to force memory cleanup on the system. How do I analyze the memory usage on a system? I thought I would document a few of the ways to see memory use.

You can use commands like ‘top’ and ‘vmstat’ to get an idea of what your system is chewing on.  Specifically looking at memory I tend to use:

watch -n 1 free -m

For a more detailed look use:

watch -n 1 cat /proc/meminfo
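To see which processes are the biggest memory consumers, sorting the ps output by memory percentage helps:

ps aux --sort=-%mem | head -n 10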

If you suspect a program of having a leak you can use valgrind to dig even deeper:

valgrind --leak-check=yes program_to_test

‘valgrind’ is great for testing, however it is not too helpful with currently running processes, or without some experience.

So you analyze the system and determine there is memory that has not been properly freed; what do you do? You can reboot, but that isn’t always an option. Instead, you can force-clear the cache by doing the following:

sudo sysctl -w vm.drop_caches=3

This frees up unused but claimed memory in Ubuntu (and most Linux flavors). This command won’t affect system stability or performance; it just cleans up memory used by the Linux kernel for caches. That said, I have noticed the system is more responsive afterwards (a contradiction, you decide). Here is an example of how much memory you can free up with this command:

$ free
             total       used       free     shared    buffers     cached
Mem:      16287672   15997176     290496       5432     404120   14415648
-/+ buffers/cache:    1177408   15110264
Swap:      4093884          0    4093884
[msaba@nfc ~]$ sudo sysctl -w vm.drop_caches=3
[sudo] password for msaba: 
vm.drop_caches = 3
[msaba@nfc ~]$ free
             total       used       free     shared    buffers     cached
Mem:      16287672     948076   15339596       5432       1268      92708
-/+ buffers/cache:     854100   15433572
Swap:      4093884          0    4093884

Another command that can free up used or cached memory (inodes, page cache, and ‘dentries’):

sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
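For reference, the value written selects what gets dropped: 1 frees the page cache, 2 frees dentries and inodes, and 3 frees both.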

I have not seen any significant difference between the results of this and the first command.

I’ll add updates to this page as I think of them.  Good luck for now.

Denyhosts Assists

Every so often a legitimate user will get blocked by DenyHosts. When this happens you can re-enable their access with these nine simple steps (UPDATE: or use the faster version, see below):

  1. Stop DenyHosts
    # service denyhosts stop
  2. Remove the IP address from /etc/hosts.deny
  3. Edit /var/lib/denyhosts/hosts and remove the lines containing the IP address.
  4. Edit /var/lib/denyhosts/hosts-restricted and remove the lines containing the IP address.
  5. Edit /var/lib/denyhosts/hosts-root and remove the lines containing the IP address.
  6. Edit /var/lib/denyhosts/hosts-valid and remove the lines containing the IP address.
  7. Edit /var/lib/denyhosts/users-hosts and remove the lines containing the IP address.
  8. Consider adding the IP address to /etc/hosts.allow
    sshd:  IP_Address
  9. Start DenyHosts
    # service denyhosts start

That’s it, your user should be able to access the server again.

The above process is a bit tedious, however I am leaving it there because it gives details about which files are involved. Since doing the above is time consuming, here is what I have been doing instead, which is much easier (a script version follows the list):

  1. Stop DenyHosts
    # service denyhosts stop
  2. Remove the IP address from /etc/hosts.deny
    1. # sed -i '/IP_ADDRESS/d' /etc/hosts.deny
  3. Remove all entries found under /var/lib/denyhosts/ containing the IP address.
    1. # cd /var/lib/denyhosts
      # for i in *hosts*;do sed -i '/IP_ADDRESS/d' "$i";done
  4. Consider adding the IP address to /etc/hosts.allow
    sshd:  IP_Address
  5. Start DenyHosts
    # service denyhosts start
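If you find yourself doing this often, the whole procedure fits in a small script. This is just a sketch; unblock_ip.sh is a hypothetical name, and it assumes the DenyHosts data lives in /var/lib/denyhosts:

#!/bin/bash
# unblock_ip.sh -- sketch of a DenyHosts unblock helper (hypothetical script)
# usage: unblock_ip.sh IP_ADDRESS
ip="$1"
[ -z "$ip" ] && { echo "usage: $0 IP_ADDRESS" >&2; exit 1; }
service denyhosts stop
# note: dots in the IP act as regex wildcards here, same as in the manual steps
sed -i "/$ip/d" /etc/hosts.deny
cd /var/lib/denyhosts || exit 1
for f in *hosts*; do
    sed -i "/$ip/d" "$f"
done
service denyhosts start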