
yum Invalid System Credential error

I ran across the following yum error after migrating a system from being a client of Satellite 5.6 to Satellite 6.1. First, here is the error:

# yum update
Loaded plugins: package_upload, priorities, rhnplugin, search-disabled-repos, security, subscription-manager
There was an error communicating with RHN.
RHN Satellite or RHN Classic support will be disabled.

Error Message:
    Please run rhn_register as root on this client
Error Class Code: 9
Error Class Info: Invalid System Credentials.
Explanation: 
     An error has occurred while processing your request. If this problem
     persists please enter a bug report at bugzilla.redhat.com.
     If you choose to submit the bug report, please be sure to include
     details of what you were trying to do when this error occurred and
     details on how to reproduce this problem.

Setting up Update Process
rhel-6-server-rpms                                                                                                                                                            | 2.0 kB     00:00     
rhel-6-server-satellite-tools-6.1-rpms                                                                                                                                        | 2.1 kB     00:00     
No Packages marked for Update
This left me scratching my head for a bit, and a quick search didn’t produce much, so I thought I should document this for posterity.
The problem was with the contents of the file /etc/yum/pluginconf.d/rhnplugin.conf
Part of my transition is running this command:
sed -i -e 's/enabled=1/enabled=0/g' /etc/yum/pluginconf.d/rhnplugin.conf
The problem was that, unlike all of my other systems, this file must have been hand-edited at some point: instead of containing “enabled=1” it contained “enabled = 1”, so my sed expression never matched.
To correct that I modified my sed command to ignore white space:
sed -i -e 's/enabled\s*=\s*1/enabled=0/g' /etc/yum/pluginconf.d/rhnplugin.conf
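To confirm the plugin actually ended up disabled, a quick check:

grep -i 'enabled' /etc/yum/pluginconf.d/rhnplugin.conf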

More details can be found in the yum.conf man page.

Hope that is helpful.

 

RHEL 6 / RHEL 7 Comparison

Moving from Red Hat 6 to Red Hat 7, there are a *lot* of differences to get used to. It is like having a friend come over and rearrange your entire house, including all the closets and cupboards! You know it is your house, you just can’t seem to find any of your stuff!

Feature-by-feature comparison, RHEL 7 vs. RHEL 6:

Default File System
  RHEL 7: XFS
  RHEL 6: EXT4

Kernel Version
  RHEL 7: 3.10.x kernel
  RHEL 6: 2.6.x kernel

Kernel Code Name
  RHEL 7: Maipo
  RHEL 6: Santiago

General Availability Date of First Major Release
  RHEL 7: 2014-06-09 (kernel 3.10.0-123)
  RHEL 6: 2010-11-09 (kernel 2.6.32-71)

First Process
  RHEL 7: systemd (process ID 1)
  RHEL 6: init (process ID 1)

Runlevels
  RHEL 7: runlevels are now called “targets”:
    runlevel0.target -> poweroff.target
    runlevel1.target -> rescue.target
    runlevel2.target -> multi-user.target
    runlevel3.target -> multi-user.target
    runlevel4.target -> multi-user.target
    runlevel5.target -> graphical.target
    runlevel6.target -> reboot.target
  The default target is /etc/systemd/system/default.target (by default linked to the multi-user target).
  RHEL 6: traditional runlevels 0 through 6, with the default runlevel defined in /etc/inittab.

Host Name
  RHEL 7: with the move to systemd, the hostname is defined in /etc/hostname.
  RHEL 6: the hostname was defined in /etc/sysconfig/network.

UID Allocation
  RHEL 7: new users get UIDs assigned starting from 1000; this can be changed in /etc/login.defs if required.
  RHEL 6: new users get UIDs assigned starting from 500; this can be changed in /etc/login.defs if required.

Max Supported File Size
  RHEL 7 (XFS): maximum individual file size = 500 TB; maximum filesystem size = 500 TB. (These maximums apply only to 64-bit machines; Red Hat Enterprise Linux does not support XFS on 32-bit machines.)
  RHEL 6 (EXT4): maximum individual file size = 16 TB; maximum filesystem size = 16 TB. (Based on a 64-bit machine; on a 32-bit machine the maximum file size is 8 TB.)

File System Check
  RHEL 7: xfs_repair. XFS does not run a file system check at boot time.
  RHEL 6: e2fsck. The file system check is executed at boot time.

Differences Between xfs_repair and e2fsck
  xfs_repair performs:
    - Inode and inode blockmap (addressing) checks
    - Inode allocation map checks
    - Inode size checks
    - Directory checks
    - Pathname checks
    - Link count checks
    - Freemap checks
    - Super block checks
  e2fsck performs:
    - Inode, block, and size checks
    - Directory structure checks
    - Directory connectivity checks
    - Reference count checks
    - Group summary info checks

Difference Between xfs_growfs and resize2fs
  RHEL 7: xfs_growfs takes the mount point as its argument.
  RHEL 6: resize2fs takes the logical volume name as its argument.

File System Structure
  RHEL 7: /bin, /sbin, /lib, and /lib64 are now nested under /usr.
  RHEL 6: /bin, /sbin, /lib, and /lib64 are directly under /.

Boot Loader
  RHEL 7: GRUB 2. Supports GPT and additional firmware types, including BIOS, EFI, and OpenFirmware; able to boot from various file systems (XFS, ext4, NTFS, HFS+, RAID, etc.).
  RHEL 6: GRUB 0.97.

kdump
  RHEL 7: supports kdump on large-memory systems up to 3 TB.
  RHEL 6: kdump does not work properly on systems with large amounts of RAM.

System & Service Manager
  RHEL 7: systemd; compatible with the SysV and Linux Standard Base init scripts it replaces.
  RHEL 6: Upstart.

Enabling/Starting Services
  RHEL 7: the systemctl command replaces service and chkconfig.
    Start a service: systemctl start nfs-server.service
    Enable a service at boot: systemctl enable nfs-server.service
    You can still use the service and chkconfig commands to start/stop and enable/disable services, but they are not 100% compatible with the RHEL 7 systemctl command (according to Red Hat).
  RHEL 6: the service and chkconfig commands.
    Start a service: service nfs start (or /etc/init.d/nfs start)
    Enable a service at specific runlevels: chkconfig --level 35 nfs on

Default Firewall
  RHEL 7: firewalld (dynamic firewall). The built-in configuration lives under /usr/lib/firewalld; the configuration you can customize lives under /etc/firewalld. It is not possible to use firewalld and iptables at the same time, but you can disable firewalld and use iptables as before.
  RHEL 6: iptables.

Network Bonding
  RHEL 7: team driver; /etc/sysconfig/network-scripts/ifcfg-team0 with DEVICE="team0" and DEVICETYPE="Team".
  RHEL 6: bonding; /etc/sysconfig/network-scripts/ifcfg-bond0 with DEVICE="bond0".

Network Time Synchronization
  RHEL 7: the chrony suite (faster time sync compared with ntpd).
  RHEL 6: ntpd.

NFS
  RHEL 7: NFSv4.1. NFSv2 is no longer supported; RHEL 7 supports NFSv3, NFSv4.0, and NFSv4.1 clients.
  RHEL 6: NFSv4.

Cluster Resource Manager
  RHEL 7: Pacemaker
  RHEL 6: rgmanager

Load Balancer Technology
  RHEL 7: Keepalived and HAProxy
  RHEL 6: Piranha

Desktop/GUI Interface
  RHEL 7: GNOME 3 and KDE 4.10
  RHEL 6: GNOME 2

Default Database
  RHEL 7: MariaDB (the default implementation of MySQL)
  RHEL 6: MySQL

Managing Temporary Files
  RHEL 7: systemd-tmpfiles (a more structured and configurable method to manage temporary files and directories).
  RHEL 6: tmpwatch.
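To see the runlevel/target change in practice, here is a minimal comparison of how you check and set the default boot state on each release (the nfs-server example above shows the service side):

# RHEL 7: query and set the default target
systemctl get-default
systemctl set-default multi-user.target

# RHEL 6: the default runlevel lives in /etc/inittab
grep ':initdefault:' /etc/inittab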

To reboot or not to reboot?

You have patches to apply. We all know that if there are kernel patches, you need to (or at least should) reboot the server. But what about other packages? There are a few non-kernel patches that can cause havoc if you apply them and do not reboot the server. The biggest packages most people miss are libraries, specifically libraries used by the system itself, like glibc. When the system is running it loads the libraries it needs into memory, and updating does not force a reload of those libraries. Therefore, after patching you will have the old version in memory and the new version on disk. When a new subroutine or process is started it will load the new version into memory, and this is where the fun can start. I say fun because you can see some really strange behavior. Perhaps you have seen it, rebooted in frustration, and solved the problem while remaining perplexed; well, now you know why.

Since I deal mostly with Red Hat these days, here are the packages for which a reboot is required or highly recommended. (Caveat: if you can reload what is in memory, you do not need to reboot. This is what we do with services like Tomcat or Apache after a patch; restarting the service removes the old code from memory and loads the new.)

While we all want to avoid interruptions to system uptime, when updating these packages a reboot is required. Use your own discretion, as this list is provided as an informational guide only. Red Hat could introduce changes that grow or shrink this list, and you may be using packages or functionality not considered here.

Red Hat Enterprise Linux 5:

  • kernel
  • kernel-smp
  • kernel-PAE
  • kernel-xen
  • glibc
  • hal

Red Hat Enterprise Linux 6:

  • kernel
  • *-firmware-*
  • glibc
  • hal

Red Hat Enterprise Linux 7:

  • kernel
  • glibc
  • linux-firmware
  • systemd
  • udev
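One way to spot processes that are still running with pre-update code is the needs-restarting utility from the yum-utils package (a sketch; the lsof alternative works because lsof flags deleted-but-still-mapped files as DEL):

yum install yum-utils
needs-restarting
# alternative: list deleted libraries that running processes still have mapped
lsof | grep 'DEL.*lib'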

Remember, even when you don’t have to reboot, you should restart the updated service. Good luck.

OCI on RHEL6

Our developers had to have OCI.  Now that I got that out of the way. 😉

We use Oracle as our DB for most applications (calm down, like you couldn’t have figured that out). In setting up a new application server for a custom application, it came to my attention that the application used OCI calls. What a pain to get working on Red Hat! There is a ton of documentation for Oracle Linux, but that wasn’t an option. So here is what I had to do to get things working.

yum install php-pecl-apc php-pear gcc php-devel glibc glibc-devel

# Install the Oracle Instant Client RPMs
rpm -iv oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64.rpm

# Tell the runtime linker where the Instant Client libraries live
echo '/usr/lib/oracle/11.2/client64/lib' > /etc/ld.so.conf.d/oracle-instantclient-x86_64.conf
ldconfig

# Building oci8 1.4.10 by hand works if you point configure at the
# Instant Client location (then make && make install):
pear download pecl/oci8
tar -xvzf oci8-1.4.10.tgz
cd oci8-1.4.10
./configure --with-oci8=shared,instantclient,/usr/lib/oracle/11.2/client64

# What I ended up using: the newer oci8 via pecl
wget http://pecl.php.net/get/oci8-2.0.2.tgz
pecl install oci8-2.0.2.tgz

# Enable the extension and restart Apache
echo 'extension=oci8.so' > /etc/php.d/oci8.ini
/etc/init.d/httpd restart
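A quick way to confirm the extension actually loaded (a minimal check; oci_connect only exists once oci8.so is loaded):

php -m | grep oci8
php -r 'var_dump(function_exists("oci_connect"));'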

The Root of Missing Mail

Like all conscientious system administrators, I like to keep tabs on my servers. One way of doing this is checking root’s email daily. This is a great idea if you have a few servers and never take vacation! I manage close to 100 servers, so I need a more efficient way of “hearing” my servers when they complain to root about something. Aside from a monitoring solution (not covered here), the best way to do this is to redirect where email for the root user gets sent.

This seems pretty simple, so I never thought of posting about it, until today. Some facts: forwarding mail for the root user leverages the /etc/aliases file. As always, I added a line to /etc/aliases like this:

# vi /etc/aliases

     root:    myemailaddress@uconn.edu

Ideally you want to set the email address to a listserv so that your backup administrator also receives these messages; then you can take a vacation.

I made that change yesterday on a new server and didn’t give it a second thought. Today: no mail, and I know there was an error on the system?!

The first thing I checked was whether I could send mail from the server at all (I could have sworn I already had… or maybe I just forgot because I am sleep deprived). I was able to send mail from the command line to a full email address, but not to an alias. OK, that is a big clue.

While I have never had to do this before (perhaps I rebooted all my other systems?), to fix the problem I simply ran this command:

# newaliases

Bingo, mail started flowing!
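For the curious, the reason this works: sendmail and postfix do not read /etc/aliases directly, they read a compiled database built from it, and newaliases is what rebuilds that database after you edit the text file (the exact .db path can vary by MTA):

# ls -l /etc/aliases /etc/aliases.db

After running newaliases, the .db file should be newer than the text file you edited.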

If that doesn’t fix it for you, other things to check are:

– Include the following in your /etc/hosts.allow:

ALL: 127.0.0.1 : allow

 

memcached

In support of the Kuali project.

Setting up true failover for the Kuali application servers. Currently, if a node goes down, the user needs to re-authenticate. The following procedure configures the system so it can lose a node and the users on that node will not lose their sessions.

My part on the system side was fairly straightforward:

yum install memcached
iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 11211 -j ACCEPT
service iptables save
chkconfig memcached on
service memcached start
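With memcached running, a quick sanity check that it is up and answering (assuming nc/netcat is installed; stats and quit are standard memcached text-protocol commands):

printf 'stats\r\nquit\r\n' | nc localhost 11211 | head -5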

With that configured the work to enable tomcat to leverage memcached can begin:

Parts of the following information were found at www.bradchen.com.

Download the most recent copy of the required jars (the memcached-session-manager jars and their dependencies) and install them into the tomcat_dir/lib directory:

Then, on each Tomcat instance, open tomcat_dir/conf/context.xml and add the following lines inside the <Context> tag:

<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="n1:localhost:11211"
    requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$" />

If memcached is listening on a different port, change the value in memcachedNodes. Port 11211 is the default port for memcached.
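For a real failover setup you would list every memcached instance, one nN:host:port entry per node (the hostnames here are hypothetical):

memcachedNodes="n1:app1.example.com:11211,n2:app2.example.com:11211"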

Open tomcat_dir/conf/server.xml, look for the following lines:

<Server port="8005" ...>
    ...
    <Connector port="8080" protocol="HTTP/1.1" ...>
    ...
    <Connector port="8009" protocol="AJP/1.3" ...>

Change the ports so the two installations listen on different ports. This is optional, but I would also disable the HTTP/1.1 connector by commenting out its <Connector> tag, as the setup documented here only requires the AJP connector to be enabled.

Finally, look for this line, also in tomcat_dir/conf/server.xml:

<Engine name="Catalina" defaultHost="localhost" ...>

Add the jvmRoute property and assign it a value that is different between the two installations. For example:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1" ...>

And, for the second instance:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm2" ...>

That’s it for Tomcat configuration. This configuration uses memcached-session-manager’s default serialization strategy and enables sticky session support. For more configuration options, refer to the links in the references section.

In our apache load balancer we add the following definition:

ProxyPass /REFpath balancer://Cluster_Name
ProxyPassReverse /REFpath balancer://Cluster_Name

<Proxy balancer://Cluster_Name>
   BalancerMember ajp://HOSTNAME:8009/REFpath route=jvm1  timeout=600 min=10 max=100 ttl=60 retry=120 connectiontimeout=10
   BalancerMember ajp://HOSTNAME:8009/REFpath route=jvm2  timeout=600 min=10 max=100 ttl=60 retry=120 connectiontimeout=10
   BalancerMember ajp://HOSTNAME:8009/REFpath route=jvm3  timeout=600 min=10 max=100 ttl=60 retry=120 connectiontimeout=10
   BalancerMember ajp://HOSTNAME:8009/REFpath route=jvm4  timeout=600 min=10 max=100 ttl=60 retry=120 connectiontimeout=10
   ProxySet lbmethod=byrequests
   ProxySet stickysession=JSESSIONID|jsessionid
   ProxySet nofailover=On
</Proxy>

Note that the BalancerMember lines point to the ports and jvmRoutes configured above. This sets up a load balancer that dispatches web requests to multiple Tomcat installations. When one of the Tomcat instances gets shut down, requests will be served by the others that are still up. As a result, users do not experience downtime when one of the Tomcat instances is taken down for maintenance or application redeployment.

This step also sets up sticky sessions. What this means is that, if a user begins a session on instance 1, she will be served by instance 1 throughout the entire session, unless of course that instance goes down. This can be beneficial in a clustered environment, as application servers can use session data stored locally without contacting a remote memcached.

Increasing the size of a filesystem

 

fdisk -l
fdisk /dev/sdc

In fdisk:

c (toggle DOS-compatible mode off)
p (print the partition table to make sure the disk is not in use)
n (new partition)
p (primary partition)
1 (give it a number 1-4, then set start and end sectors)
w (write table to disk and exit)
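If you would rather script this than drive fdisk interactively, parted can create the same partition non-interactively (a sketch, assuming a brand-new, empty /dev/sdc getting a single partition spanning the disk):

parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary 1MiB 100%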

Now create a physical volume, add it to the VG, extend the LV and then the file system.

pvcreate /dev/sdc1
vgextend VG_NAME /dev/sdc1
lvextend -L+5G LV_PATH (i.e.: /dev/VG_NAME/LV_NAME)
resize2fs LV_PATH
(OR if using xfs: xfs_growfs MOUNT_POINT — xfs_growfs takes the mount point, not the LV path)

Done.

Other useful commands when working with disks include:

# lsblk
NAME                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sr0                               11:0    1  1024M  0 rom  
sda                                8:0    0 501.1M  0 disk 
└─sda1                             8:1    0   500M  0 part /boot
sdb                                8:16   0  29.5G  0 disk 
└─sdb1                             8:17   0  29.5G  0 part 
  ├─vg_name-lv_root (dm-0) 253:0    0  40.6G  0 lvm  /
  └─vg_name-lv_swap (dm-1) 253:1    0   3.7G  0 lvm  [SWAP]
sdc                                8:32   0    20G  0 disk

The lsblk command lists all block devices. As seen above, it is an easy way to see disks, partitions, and LVM affiliations. Of course, if you just want the block device names, this will work too:

ls -d /sys/block/sd*

 

Extended ACLs

To permanently remove the ACL from a file:

# setfacl -bn file.txt

To permanently remove ACLs from an entire directory tree:

# setfacl -R -b directory.name

(-b is the short form of --remove-all, so there is no need to pass both; -R applies the change recursively.)

To overwrite permissions, setting them to rw for files and rwx for dirs:

$ find . \( -type f -exec setfacl -m g:mygroup:rw '{}' ';' \) \
      -o \( -type d -exec setfacl -m g:mygroup:rwx '{}' ';' \)

To set mygroup ACL permissions based on existing group permissions (rwx where the group already has execute, rw otherwise):

$ find . \( -perm -g+x -exec setfacl -m g:mygroup:rwx '{}' ';' \) \
      -o \( -exec setfacl -m g:mygroup:rw '{}' ';' \)

You’ll probably want to check that the group mask provides the effective permissions you expect. If not, you can do it the old-school way and run this too:

$ find . -type d -exec chmod g+rwX '{}' ';'
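To verify what you ended up with, getfacl (part of the same acl package as setfacl) shows each ACL entry along with the effective rights mask:

$ getfacl file.txt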


X11 error on login to Red Hat servers

I noticed that since the last set of patches, many of my Red Hat 6 systems report an X11 forwarding error after login:

X11 forwarding request failed on channel 0

To correct this problem you need to install the following package:

yum install xorg-x11-xauth

I have not had the time to investigate why this is suddenly a problem. When I have time, I’ll report back on the why.
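A quick way to confirm forwarding works again after the install (run from a machine with a local X server; the display number will vary):

ssh -X yourserver 'echo $DISPLAY'

A working setup prints something like localhost:10.0, and the error above should no longer appear at login.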