Saturday, June 29, 2013

A few NetApp dedup commands

I found these NetApp commands when I started digging into some dedup performance issues:


  • > sis start -s /vol/$volume
    • Starts deduplication on the volume and scans the data that is already there
  • > sis status
    • Shows the status and a few settings of all of your volumes
  • > df -s /vol/$volume
    • Shows the space saved by deduplication on the volume
  • > sysstat 1
    • Shows you some basic performance stats, updated once per second
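Putting those together, a typical session looks something like this (the volume name is hypothetical and the comments are my own, so treat it as a sketch rather than a transcript):

```
> sis start -s /vol/vol_www      # kick off a scan of the data already on the volume
> sis status                     # wait for the volume to show Idle again
> df -s /vol/vol_www             # see how much space dedup clawed back
> sysstat 1                      # keep an eye on CPU and disk while the scan runs
```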

Tuesday, June 4, 2013

Compare two files and show the lines in one that aren't in the other.

Quite often I have two sorted files and I want to "subtract" one from the other.  I've used a few different tools like perl, grep, and awk with success, but they can be slow.

In typical Linux fashion there is a tool that does exactly what I need very quickly, called join.

Here is a quick example: join -1 3 -2 1 -v 1 file1 file2 > output

That says to use column 3 of file1 ( -1 3 ) and column 1 of file2 ( -2 1 ) as the join fields.  It will then show only the lines that are in file1 but not in file2 ( -v 1 ).

The files must already be sorted on those columns before you run join.
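Here is a self-contained sanity check (the file names and contents are made up for the example):

```shell
# file1 has its key in column 3; file2 has its key in column 1 (both pre-sorted)
printf 'a b 1\nc d 2\ne f 3\n' > file1
printf '2\n' > file2

# -v 1: print only the lines of file1 whose key has no match in file2
join -1 3 -2 1 -v 1 file1 file2 > output
cat output
```

Note that join moves the join field to the front of each output line, so the result here is "1 a b" and "3 e f".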

Sunday, June 2, 2013

A way to create an RPM

I ran across a tool that looks very promising for turning a directory of files into an RPM.

It is called fpm and I'm starting to look into it now.

Wednesday, January 16, 2013

Running Parallel Commands on a group of Linux boxes

I have been using a program called cssh for a while now and really appreciate its features.  Every once in a while, though, I want to run commands from the command line without spawning a bunch of new windows.  I recently ran across polysh, which works very nicely and does exactly what I wanted.

Wednesday, September 12, 2012

What is taking up all this space?

We had a filesystem that kept growing even though the user was deleting the log files. du and df didn't match up, and it was starting to be a problem. We restarted one of the apps and got back a little space, so I figured there were still open file handles holding on to the space.

>>lsof | grep deleted

 That showed me the app that was holding on to the space.

>>lsof | grep deleted | awk '{print $1, $2, $7, $9}'

 That trims the output down to just the command name, PID, file size, and file name. Then, armed with the app's PID, I ran:

 >>kill -HUP 8787

 Presto, that freed up all the disk space.
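The underlying behaviour is easy to reproduce on any Linux box; this sketch assumes /proc is mounted, and fd 3 is an arbitrary descriptor number:

```shell
tmp=$(mktemp)
exec 3>"$tmp"                    # hold the file open on descriptor 3
echo "some data" >&3
rm "$tmp"                        # unlink it: du and df now disagree
link=$(readlink /proc/$$/fd/3)   # /proc still shows the file as "(deleted)"
echo "$link"
exec 3>&-                        # closing the descriptor frees the space
```

Worth noting: kill -HUP only freed the space here because that particular app reopens its files on SIGHUP; for an app that doesn't, restarting it is the safe way to release the handles.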

Tuesday, December 13, 2011

Issue starting a VM on Citrix Xen after an outage

After a recent problem with our Citrix Xen cluster we had one Windows VM that refused to start up:

[root@XENSERVER1 ~]# xe vm-start uuid=4dc9528c-0eb7-38fc-7246-66b387d6aa0e
Error code: SR_BACKEND_FAILURE_46
Error parameters: , The VDI is not available [opterr=VDI aae84c26-4520-45cd-ad66-5e379874f5dd already attached RW],

After a lot of poking and prodding, the VM's disks seemed fine.
I tried to export the VM and received this message:

[root@XENSERVER1 ~]# xe vm-export uuid=4dc9528c-0eb7-38fc-7246-66b387d6aa0e filename=/var/run/sr-mount/3184f86f-9743-b819-5cd5-e84ccf7e7c6c/win-server-export.vdi
The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
message: Failure("The VDI aae84c26-4520-45cd-ad66-5e379874f5dd is already attached in RW mode; it can't be attached in RO mode!")

That got me on the right track and I did this:

[root@XENSERVER1 ~]# xe-toolstack-restart
Stopping xapi: .. [ OK ]
Stopping the v6 licensing daemon: [ OK ]
Stopping the memory ballooning daemon: [ OK ]
Stopping perfmon: [ OK ]
Stopping the fork/exec daemon: [ OK ]
Stopping the multipath alerting daemon: [ OK ]
Starting the multipath alerting daemon: [ OK ]
Starting the fork/exec daemon: [ OK ]
Starting perfmon: [ OK ]
Starting the memory ballooning daemon: [ OK ]
Starting the v6 licensing daemon: [ OK ]
Starting xapi: ....start-of-day complete. [ OK ]
done.

And presto the VM booted:

[root@XENSERVER1 ~]# !250
xe vm-start uuid=4dc9528c-0eb7-38fc-7246-66b387d6aa0e

The problem was that XENSERVER1 was the only one that was not rebooted during the outage since it was the master.

Tuesday, December 6, 2011

Simple log/file rotation

I wanted a nice way to do an hourly backup of a MySQL database and only keep a day's worth.

In the past I have written scripts that would check how many files were there, or used find to delete
the old ones.

I wondered if I could use logrotate to handle this, since it is installed on most systems and is simple to use.
Unfortunately it doesn't understand hourly; it only does intervals like daily, weekly, and monthly.

I wrote my backup script and created a simple logrotate config file, but left out the directive that says
how often to rotate.

[root@server1 bin]# cat backup-www-db.logrotate
/var/database_backup/mysql_dump.sql {
# hourly
rotate 24
compress
delaycompress
missingok
create 640 root adm
}

Then I just told the backup script to run logrotate as the first thing it does, giving it that config file:
/usr/sbin/logrotate /usr/adm/bin/backup-www-db.logrotate

It rotates the previous database dumps and compresses the older ones (delaycompress leaves the most recent rotation uncompressed).

It only runs when the script runs, so I set cron to run it every hour, and we have a nice, simple solution.
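For completeness, the cron side is a single entry; the script name here is hypothetical (only the /usr/adm/bin path appears in the post above):

```
# /etc/cron.d entry: run the dump-and-rotate script at the top of every hour
0 * * * * root /usr/adm/bin/backup-www-db.sh
```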