Empulse Group: a collection of notes from a sysadmin, musician, and father

14Aug/12

Here is a small bash script that creates users from a list piped to it, setting and printing a random password for each. This was for mail users, so I set the shell to /bin/false.

#!/bin/bash
# usage: ./script.sh < userlist

while read -r USER; do
   PASS=$(< /dev/urandom tr -dc 'A-Za-z0-9_' | head -c8)
   /usr/sbin/useradd -s /bin/false "$USER"
   echo "$PASS" | passwd --stdin "$USER"
   echo "User: $USER / Pass: $PASS"
done
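The password line can be tried on its own. This is just the generator from the script above run standalone, outside the user-creation loop:

```shell
# Generate one 8-character random password, same pipeline as the script.
PASS=$(< /dev/urandom tr -dc 'A-Za-z0-9_' | head -c8)
echo "$PASS"
echo "length: ${#PASS}"
```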
12Aug/12

VIM tips

Here are tips for moving around in VIM while in command mode:
press 0 (zero) to jump to the beginning of the line
press $ to jump to the end of the line
press :7 to jump to line 7
press gg to jump to the beginning of the file
press G to jump to the end of the file
press dd to delete the current line
press u to undo
press control+r to redo
press yy to yank (copy) the current line
press P to put (paste) before the current line
press p to put after the current line
press /string to search for "string" in the file, going forward
press ?string to search for "string" in the file, going backwards
press n to repeat search in same direction
press N to repeat search in reverse direction
press :noh to clear your search highlighting
press :%s/search/replace/g to replace each instance of "search" with "replace"
press A to insert after the end of line
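For reference, the :%s/search/replace/g substitution behaves like sed's global substitute, which is handy when you want the same edit non-interactively:

```shell
# Same substitution as vim's :%s/foo/qux/g, applied with sed on stdin.
printf 'foo bar\nfoo baz\n' | sed 's/foo/qux/g'
# prints:
# qux bar
# qux baz
```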


12Aug/12

file count BASH script

#!/bin/bash
# 20120812
# by: eric hernandez
#
# usage: ./file_count [path]
#
find "$1" -type d | while read -r DIR; do
   COUNT=0
   for FILE in "$DIR"/*; do
      if [ -f "$FILE" ]; then
         ((COUNT++))
      fi
   done
   echo -e "$COUNT \t $DIR"
done
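A quick way to sanity-check the output: build a throwaway tree (the /tmp paths here are made up for the demo) and count files per directory with the same find/test logic:

```shell
# Throwaway directory tree for the demo.
mkdir -p /tmp/fc_demo/sub
touch /tmp/fc_demo/a /tmp/fc_demo/b /tmp/fc_demo/sub/c

# Same idea as the script: one file count per directory.
find /tmp/fc_demo -type d | while read -r DIR; do
   COUNT=$(find "$DIR" -maxdepth 1 -type f | wc -l)
   echo -e "$COUNT \t $DIR"
done
```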
6Feb/12

LVM expand

Use 'parted' to partition the new array with a single LVM physical partition.

[root@www ~]# parted
(parted) mklabel gpt
(parted) mkpart
Partition name?  []? primary
File system type?  [ext2]?
Start? 0
End? -1
(parted) print
(parted) set 1 lvm on
(parted) print
(parted) quit
[root@www ~]# pvcreate /dev/sdb1

Expand the / logical volume into the new space in the volume group.

[root@www ~]# vgextend vglocal20120206 /dev/sdb1
[root@www ~]# lvextend -l +100%FREE /dev/vglocal20120206/root00 /dev/sdb1

Grow the file system with resize2fs.

[root@www ~]# resize2fs /dev/mapper/vglocal20120206-root00
30May/11

gnuplot

gnuplot is an easy-to-use command-line tool to create graph images from any data you can throw at it.

gnuplot homepage

For my test, I am taking a 12 hour period of sar logs.

[root@www gnuplot_tests]# head tail.log
05/29/2011 01:10:01 PM  22195880   2475844     10.04    152596   1857972   2096472         0      0.00         0
05/29/2011 01:20:01 PM  22183664   2488060     10.08    153788   1867836   2096472         0      0.00         0
05/29/2011 01:30:01 PM  22183092   2488632     10.09    154196   1868708   2096472         0      0.00         0
05/29/2011 01:40:01 PM  22180212   2491512     10.10    155188   1869504   2096472         0      0.00         0


And the script to create a .png image. To run:  ./sar_plot.pg > graph.png

#!/usr/bin/gnuplot
reset
set terminal png

## set size w,h
set size 2,1

set xdata time
set timefmt "%m/%d/%Y %H:%M:%S"
set format x "%H:%M"
set xlabel "time"

set ylabel "% RAM used"

set title "Server RAM Usage"
set key reverse Left outside
set grid

set style data linespoints

#plot "tail.log" using 1:2, "tail.log" using 1:4
plot "tail.log" using 1:6
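A quick sanity check on which column the plot reads: with the sar layout shown above (the sample line is copied from the log), awk field 6 is %memused, matching the y-axis label:

```shell
# Field 6 of this sar -r line is %memused.
line='05/29/2011 01:10:01 PM  22195880   2475844     10.04    152596   1857972   2096472         0      0.00         0'
echo "$line" | awk '{print $6}'
# → 10.04
```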



15May/11

apachedump.pl

#!/usr/bin/perl
#
# apachedump.pl
# ver 0.1.0
# Eric Hernandez
# 20110516.1221
#
# Will dump out VirtualHosts found in the Apache httpd.conf file
# as well as looking for Included files and the VirtualHost blocks
# within them.
#

use File::Basename;

my $count = 1;
my $show_includes = 0;
my $show_listen = 0;

# GET THE LAST ARGUMENT AS FILENAME
my $conf_file = $ARGV[$#ARGV];
open (FILE, $conf_file) or die "Can't open $conf_file: $!";

# LOAD httpd.conf FILE IN TO @ARRAY
@file_line = <FILE>;
close (FILE);

# START SCRIPT BY PRINTING BANNER AND LOG FILE
print "apachedump.pl\n\n";
#print "Main File: " . $conf_file . "\n";
# CALL TO MAIN FUNCTION
master(@file_line);

#
# SUBROUTINES
#
sub master {
    my @line = @_;
    my $in_vhost_flag = 0;

    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /^Include/) {
            # IF AN Include DIRECTIVE IS FOUND, WE NEED TO OPEN THAT FILE
            @tmp = split(/ /, $line[$i]);
            my $name = "/etc/httpd/" . $tmp[1];
            include_file($name);
        }
        if ($line[$i] =~ /^Listen/ && $show_listen == 1) {
            print $line[$i] . "\n";
        }
        if ($line[$i] =~ /^<VirtualHost/) {
            print "(" . $count . ")" . " Virtual Host block found";
            my $tmp = $line[$i];
            $tmp =~ s/<//;
            $tmp =~ s/>//;
            $tmp =~ s/VirtualHost//;
            $tmp =~ s/ //;
            chomp $tmp;
            print " listening on " . $tmp . "\n";

            $in_vhost_flag = 1;
        }
        if ($line[$i] !~ /^<\/VirtualHost>/ && $in_vhost_flag == 1) {
            # HERE WE ARE IN THE VHost BLOCK
            if ($line[$i] =~ /[^#](DocumentRoot|ServerName|ServerAlias)/) {
                print $line[$i];
            }
        }
        if ($line[$i] =~ /^<\/VirtualHost/) {
            $in_vhost_flag = 0;
            print "\n";
            $count = $count + 1;
        }
    }
}

sub include_file {
    my $name = $_[0];

    # GET FILES IN THE INCLUDED DIRECTORY and SEND TO master() FUNCTION
    chomp $name;

    opendir (DIR, "/etc/httpd/conf.d/") or die $!;

    my $base = basename($name);

    $base =~ s/\./\\\./; # TO CHANGE "." TO "\."
    $base =~ s/\*/\.\*/; # TO CHANGE "*" TO ".*"
    $base = $base . "\$"; # now should be ".*\.conf$"

    while (my $file = readdir(DIR)) {
        if ($file =~ /$base/) {
            my $path = dirname($name) . "/" . $file;
            chomp $path;

            if ($show_includes == 1) {
                print "\nIncluded File: " . $path . "\n";
            }

            open (INC_FILE, $path) or die "Can't open $path: $!";
            my @include = <INC_FILE>;
            close(INC_FILE);
            master(@include);
        }
    }
}
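For a quick look without the script, a grep pulls out the same directives. This sketch fabricates a tiny vhost file (not a real httpd.conf) so it is self-contained:

```shell
# Fabricated mini-config for the demo.
cat > /tmp/vhost_demo.conf <<'EOF'
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
EOF

# Same directives apachedump.pl prints, without the Include handling.
grep -E 'VirtualHost|ServerName|ServerAlias|DocumentRoot' /tmp/vhost_demo.conf
```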
15May/11

rsync

To sync the contents of two directories using rsync.

Access via remote shell:

Pull: rsync [OPTION...] [USER@]HOST:SRC... [DEST]
Push: rsync [OPTION...] SRC... [USER@]HOST:DEST

-v, --verbose       increase verbosity
-a, --archive       archive mode; equals -rlptgoD (no -H,-A,-X)
-u, --update        skip files that are newer on the receiver
--existing          skip creating new files on receiver
--ignore-existing   skip updating files that exist on receiver
-z, --compress      compress file data during the transfer

SYNC: pull, then push data

PULL: rsync -avz --ignore-existing test1@empulsegroup.com:/home/test1/Documents .
PUSH: rsync -avz Documents test1@empulsegroup.com:/home/test1

15May/11

using ‘diff’ to compare files or directories

Find differences between two files or directories.

# diff sample1 sample2
2,3c2,3
< sample text. I
< will not forget
---
> EXTREMELY sample text. I
> will not EVER WANT
4a5
> OR BE BAD
6d6
< Good bye,

# cat sample1
This is my
sample text. I
will not forget
to write another message,
for my friends.
Good bye,
and thanks for all the fish!

# cat sample2
This is my
EXTREMELY sample text. I
will not EVER WANT
to write another message,
OR BE BAD
for my friends.
and thanks for all the fish!

Explanation:

(line number or range) (c, a, or d) (line number or range)

'c' - stands for a change in the line
'a' - stands for append after the line
'd' - stands for delete the line

:: For side by side comparison:

# diff -y file1 file2

'|' - stands for a change between the lines
'>' - stands for an addition of text from file2 that was not in file1
'<' - stands for a deletion of text from file1 since not found in file2

:: To create a patch file from the diff:

# diff file1 file2 > patch

:: To patch the original file:

# patch file1 -i patch -o updatedfile

# if diff sample1 sample1; then echo "files are the same"; else echo "files are DIFFERENT"; fi
files are the same
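The diff-then-patch round trip can be checked end to end (file names here are throwaway):

```shell
printf 'one\ntwo\n' > /tmp/d1
printf 'one\nTWO\n' > /tmp/d2
# diff exits non-zero when the files differ; that's expected here.
diff /tmp/d1 /tmp/d2 > /tmp/d.patch || true
# Apply the patch to a copy, leaving the original untouched.
patch /tmp/d1 -i /tmp/d.patch -o /tmp/d1.updated
cmp /tmp/d1.updated /tmp/d2 && echo "patched copy matches"
```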

:: To compare directories:

# ls tmp1
sample1 sample2

# ls tmp2
file1 file2 sample1

# diff tmp1 tmp2
Only in tmp2: file1
Only in tmp2: file2
Only in tmp1: sample2

8May/11

RHCS: Setting up MySQL and NFS cluster



Red Hat Cluster Suite

Setting up MySQL and NFS cluster

http://www.redhat.com/cluster_suite/

This document is meant to be a guide to setting up MySQL and NFS cluster services with Red Hat Cluster Suite. A training environment is provided at training.racktools.us.


Configure Hostnames and Network:

Note that you will give up DRAC access to the cluster in our setups for fencing. NIC bonding is only for redundancy of the interfaces. In our VM training we do not currently have a way to fence devices the way DRAC would offer.


  • hostname server1.domain.com


  • /etc/sysconfig/network


  • /etc/hosts

Configure SAN:

Usually you will have SAN LUNs for NFS and MySQL services. In this training you will use devices /dev/sdb and /dev/sdc for storage.


  • create new partition with fdisk


  • refresh partition tables on both servers: partprobe


  • format to ext3 with mkfs.ext3

  • turn off fsck schedule with: tune2fs -c 0 -i 0d /dev/sdb1


Install software:


  • yum install cman rgmanager system-config-cluster fontconfig xorg-x11-fonts-Type1 xorg-x11-xauth perl-Crypt-SSLeay

Configure locations:


  • mkdir -p /san/mysql-fs
  • mkdir -p /san/nfs-fs

MySQL:

At this point we want to get MySQL running on the SAN mount, or /dev/sd{b,c} partition in this case, and create symlinks from the original directory location to the mount point.


  • Move /var/lib/{mysql,mysqllogs,mysqltmp} from one server to
    the SAN partition


  • Move /etc/my.cnf from one server to the SAN

  • symlink directories and my.cnf from SAN to original locations
    on BOTH servers: ln -s /san/mysql-fs/mysql /var/lib/mysql


NFS:

Portmap and NFS services need to be running on each node in order for NFS cluster services to start.


  • service portmap start; chkconfig portmap on;
  • service nfs start; chkconfig nfs on;
  • echo "portmap: 10.0.0.0/255.0.0.0" >> /etc/hosts.allow

System-config-cluster:

You will create the cluster.conf file with the GUI tool 'system-config-cluster'. To enable X11 forwarding, SSH to the server with the "-Y" or "-X" option.

# system-config-cluster

You will first be asked to name the cluster.

Now, with an empty configuration, you can start by adding your cluster nodes based on hostnames.

Cluster Nodes:

Click on the Cluster Nodes heading and then click the button "+Add a Cluster Node". Enter server name and set Quorum Votes to '1'. Do this for each node.

Fence Devices:

Now set up fencing for each server. Fencing is the disconnection of a node from shared storage. A fence device is a hardware device that can be used to cut a node off from shared storage. In our case we use DRAC as our fencing agent.

Source: https://access.redhat.com/kb/docs/DOC-30004

Click on the Fence Devices section and then click the button "+Add a fence device".

From the drop down list, select DRAC. The login details for DRAC in our environment are on the training page. In our Rackspace configs you would use the normal DRAC credentials.

Now, with the fence devices entered, you need to set up fencing on each cluster node.

Under the Cluster Nodes section, highlight the first cluster node and then click the button "Managed Fencing For This Node".

Highlight the cluster node name and click the button "+Add a New Fence Level".

Now, highlight the new Fence-Level-1 and click the button "+Add a New Fence to this Level". Here you select the respective fencing device.

Managed Resources

Under the Resources section you will create resources for both the MySQL and NFS clusters. These resources are ip address, file systems, MySQL conf file, and NFS export and client settings.

Failover Domains

Set up a failover domain for each cluster service, MySQL and NFS.

If you want each server to be responsible for a particular service check "Prioritized List" and adjust priority of the cluster nodes inversely between failover domains.

Resources:

First, set up the ip address for the cluster services. We will setup the MySQL cluster first.

Set up a resource for the MySQL file system. In our environment we are using the /dev/sdb and /dev/sdc disks, but in our Rackspace cluster configs we would usually use SAN LUNs presented as /dev/emcpowerb, etc.

Set up the MySQL configuration file, /etc/my.cnf, which will be symlinked on each server to the SAN mount or /dev/sd{b,c} in our environment.

Set up the ip address for NFS cluster.

Set up the NFS file system.

Now, the NFS export

The last resource will be the NFS client. Target will be the network you want to allow. Path is not optional and needs to be set. You can create more NFS client resources for each network you wish to allow.

Services:

Now that we have all of our resources set we need to chain them together to make each cluster service.

Under the Failover Domain drop down select the respective failover domain.


  • click "+Add a Shared Resource to this service" and select the ip address for the MySQL cluster service.
  • highlight the ip address you just added and click "+Add a Shared Resource to the selection" and select the MySQL file system.
  • highlight the file system resource, click "+Add a Shared Resource to the selection" and select the MySQL server.

The final cluster config tree should nest each service's resources in the order they were added.

Start services:


  • for i in cman rgmanager; do service $i start; chkconfig $i on; done

Commands:


  • 'clustat' ~= Will show the status of the cluster
  • 'clusvcadm -R mysql-svc' ~= Will restart MySQL in place on the same server
  • 'clusvcadm -r mysql-svc -m ' ~= Will relocate MySQL to that node
  • 'clusvcadm -d mysql-svc' ~= Will disable MySQL
  • 'clusvcadm -e mysql-svc' ~= Will enable MySQL
  • Note: Cluster messages are logged to /var/log/messages.


22Feb/11

Tricks with iptables

Use iptables to force outbound mail through a specific IP address:

[root@www ~]# iptables -t nat -A POSTROUTING -p tcp --dport 25 -j SNAT --to-source 192.168.100.123

Limit port 80 to 100 concurrent connections per source (drop anything above the limit):

[root@www ~]# iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 100 -j DROP