Empulse Group: a collection of notes from a sys admin, musician, and father

30May/11

gnuplot

gnuplot is an easy-to-use command-line tool that can create graph images from just about any data you throw at it.

 

gnuplot homepage

 

For my test, I am taking a 12-hour period of sar memory-usage logs, with the header rows stripped:

[root@www gnuplot_tests]# head tail.log
05/29/2011 01:10:01 PM  22195880   2475844     10.04    152596   1857972   2096472         0      0.00         0
05/29/2011 01:20:01 PM  22183664   2488060     10.08    153788   1867836   2096472         0      0.00         0
05/29/2011 01:30:01 PM  22183092   2488632     10.09    154196   1868708   2096472         0      0.00         0
05/29/2011 01:40:01 PM  22180212   2491512     10.10    155188   1869504   2096472         0      0.00         0
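A rough sketch of how a file like tail.log might be produced (sar's exact output format varies by version, so the filter below is an assumption; it is demonstrated on inline text rather than a live sar -r run):

```shell
# Keep only rows that begin with a MM/DD/YYYY date; sar's banner,
# column-header, and "Average:" lines all fall away.
printf '%s\n' \
  'Linux 2.6.18-194.el5 (www)      05/29/2011' \
  '01:00:01 PM kbmemfree kbmemused  %memused  kbbuffers  kbcached' \
  '05/29/2011 01:10:01 PM  22195880   2475844     10.04    152596   1857972' \
  'Average:      22190000   2480000     10.07    153000   1860000' \
  | grep -E '^[0-9]{2}/[0-9]{2}/[0-9]{4}' > tail.log
cat tail.log
```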

 

And the script to create a .png image. To run:  ./sar_plot.pg > graph.png

#!/usr/bin/gnuplot
reset
set terminal png

## set size w,h
set size 2,1

set xdata time
set timefmt "%m/%d/%Y %H:%M:%S"  # log dates are MM/DD/YYYY; note the AM/PM field is not parsed
set format x "%H:%M"
set xlabel "time"

set ylabel "% RAM used"

set title "Server RAM Usage"
set key reverse Left outside
set grid

set style data linespoints

#plot "tail.log" using 1:2, "tail.log" using 1:4
plot "tail.log" using 1:6



15May/11

apachedump.pl

#!/usr/bin/perl
#
# apachedump.pl
# ver 0.1.0
# Eric Hernandez
# 20110516.1221
#
# Will dump out VirtualHosts found in the Apache httpd.conf file
# as well as looking for Included files and the VirtualHost blocks
# within them.
#

use File::Basename;

my $count = 1;
my $show_includes = 0;
my $show_listen = 0;

# GET THE LAST ARGUMENT AS FILENAME
my $conf_file = $ARGV[$#ARGV];
open (FILE, $conf_file) or die "Can't open $conf_file : $!";

# LOAD httpd.conf FILE IN TO @ARRAY
my @file_line = <FILE>;
close (FILE);

# START SCRIPT BY PRINTING BANNER
print "apachedump.pl\n\n";
#print "Main File: " . $conf_file . "\n";

# CALL TO MAIN FUNCTION
master(@file_line);

#
# SUBROUTINES
#
sub master {
    my @line = @_;
    my $in_vhost_flag = 0;

    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /^Include/) {
            # IF AN Include DIRECTIVE IS FOUND, THEN WE NEED TO OPEN THAT FILE
            my @tmp = split(/ /, $line[$i]);
            my $name = "/etc/httpd/" . $tmp[1];
            include_file($name);
        }
        if ($line[$i] =~ /^Listen/ && $show_listen == 1) {
            print $line[$i] . "\n";
        }
        if ($line[$i] =~ /^<VirtualHost/) {
            print "(" . $count . ")" . " Virtual Host block found";
            my $tmp = $line[$i];
            $tmp =~ s/\<//;
            $tmp =~ s/\>//;
            $tmp =~ s/VirtualHost//;
            $tmp =~ s/ //;
            chomp $tmp;
            print " listening on " . $tmp . "\n";

            $in_vhost_flag = 1;
        }
        if ($line[$i] !~ /^<\/VirtualHost>/ && $in_vhost_flag == 1) {
            # HERE WE ARE IN THE VHost BLOCK
            if ($line[$i] =~ /[^#](DocumentRoot|ServerName|ServerAlias)/) {
                print $line[$i];
            }
        }
        if ($line[$i] =~ /^<\/VirtualHost/) {
            $in_vhost_flag = 0;
            print "\n";
            $count = $count + 1;
        }
    }
}

sub include_file {
    my $name = $_[0];

    # GET FILES IN THE INCLUDED DIRECTORY and SEND TO master() FUNCTION
    chomp $name;

    my $dir = dirname($name);
    opendir (DIR, $dir) or die "Can't open $dir : $!";

    my $base = basename($name);

    $base =~ s/\./\\./;   # TO CHANGE "." TO "\."
    $base =~ s/\*/.*/;    # TO CHANGE "*" TO ".*"
    $base = $base . "\$"; # now should be ".*\.conf$"

    while (my $file = readdir(DIR)) {
        if ($file =~ /$base/) {
            my $path = $dir . "/" . $file;
            chomp $path;

            if ($show_includes == 1) {
                print "\nIncluded File: " . $path . "\n";
            }

            open (INC_FILE, $path) or die "Can't open $path : $!";
            my @include = <INC_FILE>;
            close (INC_FILE);
            master(@include);
        }
    }
    closedir (DIR);
}
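As a quick sanity check of what the script reports, the same directives can be pulled with grep from a toy vhost file (the path and names below are invented for the example):

```shell
# Write a toy vhost block, then grep out the lines apachedump.pl prints:
# the <VirtualHost> opener plus ServerName/ServerAlias/DocumentRoot.
cat > /tmp/toy_vhost.conf <<'EOF'
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
EOF
grep -E '^<VirtualHost|[[:space:]](ServerName|ServerAlias|DocumentRoot)' /tmp/toy_vhost.conf
```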
15May/11

rsync

To sync the contents of two directories using rsync:

 

Access via remote shell:

Pull: rsync [OPTION...] [USER@]HOST:SRC... [DEST]

Push: rsync [OPTION...] SRC... [USER@]HOST:DEST

 

-v, --verbose increase verbosity

-a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)

-u, --update skip files that are newer on the receiver

--existing skip creating new files on receiver

--ignore-existing skip updating files that exist on receiver

-z, --compress compress file data during the transfer

 

 

 

SYNC: pull, then push data

 

PULL: rsync -avz --ignore-existing test1@empulsegroup.com:/home/test1/Documents .

PUSH: rsync -avz Documents test1@empulsegroup.com:/home/test1
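Before pointing either command at a live host, a dry run (-n) against throwaway local directories is a cheap way to see what --ignore-existing will and will not transfer (the paths below are made-up examples):

```shell
# Build throwaway trees: src has one new file, and keep.txt exists
# on both sides (so --ignore-existing should skip it).
mkdir -p /tmp/rsync_demo/src /tmp/rsync_demo/dst
echo new  > /tmp/rsync_demo/src/new.txt
echo keep > /tmp/rsync_demo/src/keep.txt
echo keep > /tmp/rsync_demo/dst/keep.txt

# -n (dry run) lists what would be sent without copying anything.
rsync -avn --ignore-existing /tmp/rsync_demo/src/ /tmp/rsync_demo/dst/
```

The same flags work unchanged in the pull and push forms above; drop -n once the file list looks right.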

 

15May/11

using ‘diff’ to compare files or directories

Find differences between two files or directories.

 

 

# diff sample1 sample2

2,3c2,3

< sample text. I

< will not forget

---

> EXTREMELY sample text. I

> will not EVER WANT

4a5

> OR BE BAD

6d6

< Good bye,

 

 

# cat sample1

This is my

sample text. I

will not forget

to write another message,

for my friends.

Good bye,

and thanks for all the fish!

 

 

# cat sample2

This is my

EXTREMELY sample text. I

will not EVER WANT

to write another message,

OR BE BAD

for my friends.

and thanks for all the fish!

 

 

Explanation:

 

(line number or range in file1) (c, a, or d) (line number or range in file2)

 

'c' - stands for a change in the line

'a' - stands for append after the line

'd' - stands for delete the line

 

 

:: For side by side comparison:

 

# diff -y file1 file2

 

'|' - stands for a change between the lines

'>' - stands for an addition of text from file2 that was not in file1

'<' - stands for a deletion of text from file1 since not found in file2

 

 

:: To create a patch file from the diff:

 

# diff file1 file2 > patch

 

 

:: To patch the original file:

 

# patch file1 -i patch -o updatedfile
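Putting the diff/patch round trip together on scratch files (file names are illustrative):

```shell
cd /tmp
printf 'one\ntwo\nthree\n'       > file1
printf 'one\nTWO\nthree\nfour\n' > file2

diff file1 file2 > my.patch             # record the differences
patch file1 -i my.patch -o updatedfile  # apply them, writing a new file

# updatedfile should now be identical to file2:
diff updatedfile file2 && echo "updatedfile matches file2"
```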

 

# if diff sample1 sample1; then echo "files are the same"; else echo "files are DIFFERENT"; fi

files are the same

 

 

:: To compare directories:

 

# ls tmp1

sample1 sample2

 

# ls tmp2

file1 file2 sample1

 

# diff tmp1 tmp2

Only in tmp2: file1

Only in tmp2: file2

Only in tmp1: sample2

8May/11

qmtracker.pl a qmail message tracker

#!/usr/bin/perl
##########################################################################################
#
# qmtracker.pl
# Eric Hernandez <easyjeezy33@empulsegroup.com> 2011-04-03
#
# Does a number of things on a qmail maillog file. Written for Plesk servers.
#
# Will search and report all messages and their status, or single message search.
#
# Usage:
# qmtracker.pl [-f|-t] ADDY MAILLOG
# qmtracker.pl -m ID MAILLOG
# qmtracker.pl -a MAILLOG
# qmtracker.pl MAILLOG
#
# Options:
# -a : will dump all messages found based on from=
# -f : will search for messages from=
# -t : will search for messages to=
# -m : search by msg id
# ADDY : can be a full or partial email address, eg. user or domain
# MAILLOG : is the path to the mail log file, written for qmail
#
#
# NOTES/CONCERNS:
# 1. need to go back and take in to account larger ID numbers,
# eg. qmail-queue-handlers and submitter IDs
#
# 2. only looks at "maillog" in local directory, need to take in arguments
#
# 3. the longer failure notices in delivery status() function
#
# 4. memory issues? what if the mail log is too large?
#
##########################################################################################

# if no command line arguments, then print help
# no need to do anything before this
if ($#ARGV == -1) {
    help();
}

# declare arrays
# maybe messages array should be a hash?
# array in @messages will be of form:
# addy partner_addy d d d sub_ID msg_ID del_ID del_stat
@messages = ();
@msg_IDs = ();
$version = "0.8";

# get log file location, should always be last command line argument
$log_file = $ARGV[$#ARGV];
open (MAIL, $log_file) or die "Can't open $log_file : $!";

# LOAD MAILLOG FILE IN TO @ARRAY
my @line = <MAIL>;
close (MAIL);

# get command line arguments
## <-a>
if ($ARGV[0] =~ "-a") { message_splat(); }
## <-m>
if ($ARGV[0] =~ "-m") { trim_the_fat("from="); find_msg($ARGV[1]); }
## <-f/-t> <USER> <MAILLOG>
if ($#ARGV == 2) { trim_the_fat($ARGV[0], $ARGV[1]); }
## <USER> <MAILLOG>
if ($#ARGV == 1) { trim_the_fat($ARGV[0]); }
## <MAILLOG>
if ($#ARGV == 0) { trim_the_fat("from="); }

# FINALLY THE REPORT
# trim_the_fat() subroutine will gather messages based on search criteria
# unique_msg_IDs() and unique_messages() will print out just that
print "---UNIQUE MESSAGE IDs---\n";
unique_msg_IDs();

print "\n---UNIQUE MESSAGES---\n";
unique_messages();

exit;
# END PROGRAM

###########################################################################################
#####################################
#######################
############
#####
###
### SUBROUTINES
###
sub trim_the_fat {
    # get arguments for searching
    # options: from=/to=, address
    # default: from=
    my $search_to_or_from = "from=";
    my $search_addy;
    my $num_args = $#_;
    $num_args++;

    # PROCESS ARGUMENTS
    # if passed with the flags [-f/-t] and search address
    if ($num_args == 2) {
        if ($_[0] eq "-f") {
            $search_to_or_from = "from=";
            $search_addy = $_[1];
        }
        else {
            $search_to_or_from = "to=";
            $search_addy = $_[1];
        }
    }

    # STILL PROCESSING ARGUMENTS
    # if passed with just the search address
    if ($num_args == 1) {
        $search_to_or_from = "from=";
        $search_addy = $_[0];
    }

    ## FINALLY, THE SEARCH
    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /qmail-queue-handlers/) {
            if ($num_args == 2 || $num_args == 1) {
                if ($line[$i] =~ /$search_to_or_from/ && $line[$i] =~ /$search_addy/) {
                    process_line($line[$i], $search_to_or_from);
                }
            }
            else {
                if ($line[$i] =~ /$search_to_or_from/) {
                    process_line($line[$i], $search_to_or_from);
                }
            }
        }
    }
}

sub message_splat {
    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /qmail-queue-handlers/) {
            if ($line[$i] =~ /from=/ || $line[$i] =~ /to=/) {
                chomp $line[$i];
                my @x = split (/\s+/, $line[$i]);
                my $y = $x[4];
                chomp $y;
                my $z = trim_qmail_queue_handlers_ID($y);
                print "$x[5] on $x[0] $x[1] $x[2]\n";
                print "with qqh ID: $z\n";
                my $sub_ID = get_submitter_ID_from_qqh_ID($z);
                print "with submitter ID: $sub_ID\n";
                my $msg_ID = get_msg_ID_from_submitter_ID($sub_ID);
                print "with msg ID: $msg_ID\n";
                my $del_ID = get_delivery_ID_from_msg_ID($msg_ID);
                print "with delivery ID: $del_ID\n";
                my $del_stat = get_delivery_status_from_delivery_ID($del_ID);
                print "with status: $del_stat\n";
            }
        }
    }
}

sub find_msg {
    for (my $i = 0; $i <= $#messages; $i++) {
        if ($messages[$i][4] =~ $_[0]) {
            for (my $j = 0; $j <= $#{$messages[$i]}; $j++) {
                if ($j == 3) { print "Submitter ID: "; }
                if ($j == 4) { print "Message ID: "; }
                if ($j == 5) { print "Delivery ID: "; }
                if ($j == 6) { print "Initial Delivery Status: "; }
                print "$messages[$i][$j]\n";
            }
        }
    }
    exit;
}

##yanked from trim_the_fat()
sub process_line {
    my $search_pref = $_[1];

    my $line = $_[0];
    chomp $line;
    my @x = split (/\s+/, $line);
    my $y = $x[4];
    chomp $y;

    my $qqh_ID = trim_qmail_queue_handlers_ID($y);
    my $sub_ID = get_submitter_ID_from_qqh_ID($qqh_ID);
    my $msg_ID = get_msg_ID_from_submitter_ID($sub_ID);
    my $del_ID = get_delivery_ID_from_msg_ID($msg_ID);
    my $del_stat = get_delivery_status_from_delivery_ID($del_ID);
    my $msg_date = "$x[0] $x[1] $x[2]";
    my $partner_addy = get_partner_address_by_qqh_ID($qqh_ID, $search_pref);

    my @msg = ($x[5], $partner_addy, $msg_date, $sub_ID, $msg_ID, $del_ID, $del_stat);

    if (msg_exists($msg_ID)) {
        # just add the delivery attempt date to @messages array
        push_delivery_attempt_on_messages_array($msg_ID, $msg_date);
    }
    else {
        # else, add the whole message to array
        push(@messages, [@msg]);
    }
}

# will find the partner address based on
# ARGUMENTS: <qqh ID> AND <to=/from=>
sub get_partner_address_by_qqh_ID {
    my $search_addy = $_[0];
    my $search_pref = $_[1];

    if ($search_pref eq "from=") {
        $search_pref = "to=";
    }
    else {
        $search_pref = "from=";
    }

    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /qmail-queue-handlers.*.$search_addy/ && $line[$i] =~ /$search_pref/) {
            chomp $line[$i];
            my @x = split (/\s+/, $line[$i]);
            my $y = $x[5];
            chomp $y;
            chop $y;
            return $y;
        }
    }
}

# print out unique messages and their data
sub unique_messages {
    for (my $i = 0; $i <= $#messages; $i++) {
        print "\n";
        for (my $j = 0; $j <= $#{$messages[$i]}; $j++) {
            if ($j == 3) { print "Submitter ID: "; }
            if ($j == 4) { print "Message ID: "; }
            if ($j == 5) { print "Delivery ID: "; }
            if ($j == 6) { print "Initial Delivery Status: "; }
            print "$messages[$i][$j]\n";
        }
    }
}

# print out only unique msg IDs
sub unique_msg_IDs {
    for (my $i = 0; $i <= $#msg_IDs; $i++) {
        print "$msg_IDs[$i]\n";
    }
}

sub push_delivery_attempt_on_messages_array {
    for (my $i = 0; $i <= $#messages; $i++) {
        if ($messages[$i][4] =~ $_[0]) {
            push @{ $messages[$i] }, $_[1];
        }
    }
}

# checks if the msg id exists yet
# push or pop messages from stack
sub msg_exists {
    my $x = $_[0];
    if (grep /$x/, @msg_IDs) {
        return 1;
    }
    else {
        push(@msg_IDs, $x);
        return 0;
    }
}

#
# start with the form "qmail-queue-handlers[8322]:"
# we return "8322" = qqh ID
#
sub trim_qmail_queue_handlers_ID {
    my $x = $_[0];
    chomp $x;
    chop $x;
    chop $x;
    my $y = substr($x, 21);
    return $y;
}

sub get_submitter_ID_from_qqh_ID {
    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /qmail-queue-handlers.$_[0]/ && $line[$i] =~ /submitter/) { ## CHECK THIS! NOT SURE!
            chomp $line[$i];
            my @x = split (/\s+/, $line[$i]);
            my $y = $x[6];
            chomp $y;
            $z = trim_submitter_ID($y);
            return $z;
        }
    }
}

#
# start with the form "submitter[8323]"
# this returns "8323" = submitter ID
#
sub trim_submitter_ID {
    my $x = $_[0];
    chomp $x;
    chop $x;
    my $y = substr($x, 10);
    return $y;
}

sub get_msg_ID_from_submitter_ID {
    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /info.*msg/ && $line[$i] =~ /qp.*.$_[0]/) {
            chomp $line[$i];
            my @x = split (/\s+/, $line[$i]);
            my $y = $x[8];
            chomp $y;
            chop $y;
            return $y;
        }
    }
}

sub get_delivery_ID_from_msg_ID {
    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /starting.*delivery/ && $line[$i] =~ /msg.*.$_[0]/) {
            chomp $line[$i];
            my @x = split (/\s+/, $line[$i]);
            my $y = $x[8];
            chomp $y;
            chop $y;
            return $y;
        }
    }
}

sub get_delivery_status_from_delivery_ID {
    for (my $i = 0; $i <= $#line; $i++) {
        if ($line[$i] =~ /delivery.*.$_[0]/ && $line[$i] !~ /starting.*delivery/) {
            chomp $line[$i];
            my @x = split (/\s+/, $line[$i]);
            my $y = $x[8].$x[9];
            return $y;
        }
    }
}

sub help {
    print "qmtracker.pl v$version\n";
    print "2011 Eric Hernandez\n";
    print "\nWill search and report all messages and their status, or single message search.\n";
    print "\nUsage:\n\tqmtracker.pl [-f|-t] ADDY MAILLOG\n";
    print "\tqmtracker.pl -m ID MAILLOG\n";
    print "\tqmtracker.pl -a MAILLOG\n";
    print "\tqmtracker.pl MAILLOG\n";
    print "\nOptions:\n";
    print "\t-a\t\t: will dump all messages found based on from=\n";
    print "\t-f\t\t: will search for messages from=\n";
    print "\t-t\t\t: will search for messages to=\n";
    print "\t-m\t\t: search by msg id\n";
    print "\tADDY\t\t: can be a full or partial email address, eg. user or domain\n"; ####
    print "\tMAILLOG\t\t: is the path to the mail log file, written for qmail\n"; ############
    exit; ####################
} ############################
############################################
####################################################################################################################
#
# NOTES:
#
# qmail-queue-handlers()
# 1| Dec 29 23:45:45 201399-plesk-64 qmail-queue-handlers[8322]: from=anonymous@201399-plesk-64.kickstart.rackspace.com
# 2| Dec 29 23:45:45 201399-plesk-64 qmail-queue-handlers[8322]: to=root@201399-plesk-64.kickstart.rackspace.com
# 3| Dec 29 23:45:45 201399-plesk-64 qmail-queue-handlers[8322]: starter: submitter[8323] exited normally
#
# msg()
# 4| Dec 29 23:45:45 201399-plesk-64 qmail: 1262151945.557099 new msg 591067
# 5| Dec 29 23:45:45 201399-plesk-64 qmail: 1262151945.558336 info msg 591067: bytes 630 from <anonymous@201399-plesk-64.kickstart.rackspace.com> qp 8323 uid 0
#
# delivery()
# 6| Dec 29 23:45:45 201399-plesk-64 qmail: 1262151945.603234 starting delivery 1: msg 591067 to remote root@201399-plesk-64.kickstart.rackspace.com
# 7| Dec 29 23:45:45 201399-plesk-64 qmail: 1262151945.687495 delivery 1: failure: Sorry,_I_couldn't_find_any_host_named_201399-plesk-64.kickstart.rackspace.com._(#5.1.2)/
#
#
# logic flow:
# qmail-queue-handlers[8322]: >>>> submitter[8323] = qp 8323 >>>> msg 591067 >>>> delivery
#
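The logic flow above can also be walked by hand with grep, which is essentially what the script automates. A toy run on an inline copy of the sample log lines (host and addresses shortened):

```shell
# Follow one message: qqh[8322] -> submitter[8323] -> qp 8323
# -> msg 591067 -> delivery 1 and its status line.
cat > /tmp/maillog_sample <<'EOF'
Dec 29 23:45:45 host qmail-queue-handlers[8322]: from=anonymous@example.com
Dec 29 23:45:45 host qmail-queue-handlers[8322]: starter: submitter[8323] exited normally
Dec 29 23:45:45 host qmail: 1262151945.558336 info msg 591067: bytes 630 from <anonymous@example.com> qp 8323 uid 0
Dec 29 23:45:45 host qmail: 1262151945.603234 starting delivery 1: msg 591067 to remote root@example.com
Dec 29 23:45:45 host qmail: 1262151945.687495 delivery 1: failure: Sorry...
EOF
grep 'qmail-queue-handlers\[8322\]' /tmp/maillog_sample    # qqh ID -> submitter ID
grep 'qp 8323' /tmp/maillog_sample                         # submitter ID -> msg ID
grep 'delivery 1:' /tmp/maillog_sample | grep -v starting  # delivery status
```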
8May/11

RHCS: Setting up MySQL and NFS cluster



Red Hat Cluster Suite

Setting up MySQL and NFS cluster

http://www.redhat.com/cluster_suite/

This document is meant to be a guide to setting up MySQL and NFS cluster services with Red Hat Cluster Suite. A training environment is provided at training.racktools.us.


Configure Hostnames and Network:

Note that you will give up DRAC access to the cluster in our setups for fencing. NIC bonding is only for redundancy of the interfaces. In our VM training we do not currently have a way to fence devices the way DRAC would.


  • hostname server1.domain.com


  • /etc/sysconfig/network


  • /etc/hosts

Configure SAN:

Usually you will have SAN LUNs for NFS and MySQL services. In this training you will use devices /dev/sdb and /dev/sdc for storage.


  • create new partition with fdisk


  • refresh partition tables on both servers: partprobe


  • format to ext3 with mkfs.ext3

  • turn off fsck schedule with: tune2fs -c 0 -i 0d /dev/sdb1


Install software:


  • yum install cman rgmanager system-config-cluster fontconfig xorg-x11-fonts-Type1 xorg-x11-xauth perl-Crypt-SSLeay

Configure locations:


  • mkdir -p /san/mysql-fs
  • mkdir -p /san/nfs-fs

MySQL:

At this point we want to get MySQL running on the SAN mount, or /dev/sd{b,c} partition in this case, and create symlinks from the original directory location to the mount point.


  • Move /var/lib/{mysql,mysqllogs,mysqltmp} from one server to
    the SAN partition


  • Move /etc/my.cnf from one server to the SAN

  • symlink directories and my.cnf from SAN to original locations
    on BOTH servers: ln -s /san/mysql-fs/mysql /var/lib/mysql


NFS:

Portmap and NFS services need to be running on each node in order for NFS cluster services to start.


  • service portmap start; chkconfig portmap on;
  • service nfs start; chkconfig nfs on;
  • echo "portmap: 10.0.0.0/255.0.0.0" >> /etc/hosts.allow

System-config-cluster:

You will create the cluster.conf file with the GUI tool 'system-config-cluster'. To enable X11 forwarding, SSH to the server with the "-Y" or "-X" option.

# system-config-cluster

You will first be asked to name the cluster.

Now, with an empty configuration, you can start by adding your cluster nodes based on hostnames.

Cluster Nodes:

Click on the Cluster Nodes heading and then click the button "+Add a Cluster Node". Enter server name and set Quorum Votes to '1'. Do this for each node.

Fence Devices:

Now set up fencing for each server. Fencing is the disconnection of a node from shared storage. A fence device is a hardware device that can be used to cut a node off from shared storage. In our case we use DRAC as our fencing agent.

Source: https://access.redhat.com/kb/docs/DOC-30004

Click on the Fence Devices section and then click the button "+Add a fence device".

From the drop down list, select DRAC. The login details for DRAC in our environment are on the training page. In our Rackspace configs you would use the normal DRAC credentials.

Now, with the fencing devices entered you need to set up fencing on each cluster node.

Under the Cluster Nodes section, highlight the first cluster node and then click the button "Managed Fencing For This Node".

Highlight the cluster node name and click the button "+Add a New Fence Level".

Now, highlight the new Fence-Level-1 and click the button "+Add a New Fence to this Level". Here you select the respective fencing device.

Managed Resources

Under the Resources section you will create resources for both the MySQL and NFS clusters. These resources are ip address, file systems, MySQL conf file, and NFS export and client settings.

Failover Domains

Set up a failover domain for each cluster service, MySQL and NFS.

If you want each server to be responsible for a particular service check "Prioritized List" and adjust priority of the cluster nodes inversely between failover domains.

Resources:

First, set up the ip address for the cluster services. We will set up the MySQL cluster first.

Set up a resource for the MySQL file system. In our environment we are using /dev/sdb and /dev/sdc disks, but in our Rackspace cluster configs we would usually use SAN LUNs presented as /dev/emcpowerb, etc.

Set up the MySQL configuration file, /etc/my.cnf, which will be symlinked on each server to the SAN mount or /dev/sd{b,c} in our environment.

Set up the ip address for NFS cluster.

Set up the NFS file system.

Now, set up the NFS export.

The last resource will be the NFS client. Target will be the network you want to allow. Path is not optional and needs to be set. You can create more NFS client resources for each network you wish.

Services:

Now that we have all of our resources set we need to chain them together to make each cluster service.

Under the Failover Domain drop down select the respective failover domain.


  • click "+Add a Shared Resource to this service" and select the ip address for the MySQL cluster service.
  • highlight the ip address you just added and click "+Add a Shared Resource to the selection" and select the MySQL file system.
  • highlight the file system resource, click "+Add a Shared Resource to the selection" and select the MySQL server.

The final cluster config tree should look similar to this:
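The original screenshot is not reproduced here, but a rough hand-written sketch of the resulting service block in /etc/cluster/cluster.conf (names, IPs, and devices are placeholders, not taken from a real config) looks something like:

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="mysql-fod" ordered="1">
      <failoverdomainnode name="server1.domain.com" priority="1"/>
      <failoverdomainnode name="server2.domain.com" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="mysql-svc" domain="mysql-fod" autostart="1">
    <ip address="10.0.0.50" monitor_link="1">
      <fs name="mysql-fs" device="/dev/sdb1" mountpoint="/san/mysql-fs" fstype="ext3">
        <mysql name="mysql-server" config_file="/etc/my.cnf"/>
      </fs>
    </ip>
  </service>
</rm>
```

The nesting mirrors the GUI: the file system resource is a child of the ip, and the MySQL server a child of the file system, so they start and stop in that order.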

Start services:


  • for i in cman rgmanager; do service $i start; chkconfig $i on; done

Commands:


  • 'clustat' ~= Will show the status of the cluster
  • 'clusvcadm -R mysql-svc' ~= Will restart MySQL in place on the same server
  • 'clusvcadm -r mysql-svc -m <node>' ~= Will relocate MySQL to the named node
  • 'clusvcadm -d mysql-svc' ~= Will disable MySQL
  • 'clusvcadm -e mysql-svc' ~= Will enable MySQL
  • Note: Cluster messages are logged to /var/log/messages.

