Archive for the ‘Tips & Tricks’ Category

How to debug a segmentation fault caused by PHP

December 1st, 2009 4 comments

A segmentation fault can have many causes. The best thing to do when you hit one is to debug it and find out what is causing it.

I'll explain two ways to do this.

With the GNU Debugger

  • First install the gdb package with apt-get or yum.
  • Second, stop all running Apache processes.
  • Start Apache in debug mode. On Red Hat-based systems:

httpd -X

On Debian-based systems:

apache2 -X

  • Find the process id of the Apache parent process (the exact pid file name under /var/run/ depends on your distribution):

cat /var/run/

  • Start the gdb program.
  • Attach the GNU debugger to the Apache process:

attach <apache process id>
  • The debugger will halt the process, but we want it to keep running until the segmentation fault happens.
  • Now try to reproduce the segmentation fault. When it happens you will see the fault reported in the debugger.
  • To see what happened, get the backtrace with the bt command.
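Putting the steps above together, a typical session looks roughly like this. The process id 12345 and the httpd command are illustrative; attach, continue and bt are standard gdb commands:

```
httpd -X &           # start Apache in debug mode (apache2 -X on Debian)
gdb                  # start the debugger
(gdb) attach 12345   # hypothetical Apache process id
(gdb) continue       # let Apache keep running until it crashes
...reproduce the segmentation fault...
(gdb) bt             # print the backtrace of the crash
```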

With Valgrind

  • Install valgrind with yum or apt-get
  • Stop all Apache processes
  • Start valgrind with Apache in debug mode


valgrind /usr/sbin/httpd -X


valgrind /usr/sbin/apache2 -X
  • Try to reproduce the segmentation fault; valgrind will show what happened.
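If the default output is not detailed enough, memcheck can write everything to a log file. These are standard valgrind options; the httpd path is the same RHEL-style path as above:

```
valgrind --tool=memcheck --leak-check=yes --log-file=valgrind.log /usr/sbin/httpd -X
```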

Redirect output to file

March 9th, 2009 No comments

Normally when you execute a command in your shell you get the output directly on your screen. But it's also possible to redirect this output to a file, for example for logging purposes.


php myscript.php > mylog.log

Now all output from myscript.php will go into the mylog.log file. This is called “standard output” (stdout). But when a PHP error occurs it will not be written to mylog.log; instead it will be printed on your screen. This is the “error output” (stderr), and to write it to mylog.log as well you have to use this:

php myscript.php > mylog.log 2>&1

This will send all output to mylog.log, including the errors. It's also possible to write only the error output to a log file.

php myscript.php 2> mylog.log

Combine this with ‘running processes as background jobs’ and you can run your scripts/commands in the background but still be able to watch the progress in the log file.

php myscript.php > mylog.log 2>&1 &

And now watch your log with

tail -f mylog.log

You will see new lines being written to the log in real time!

These different outputs are streams: standard output is file descriptor 1 and error output is file descriptor 2, which is why the redirections above use 2> and 2>&1.
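A quick way to see the two streams in action is a subshell that writes one line to each; the file names are arbitrary:

```shell
# write one line to stdout and one to stderr, sending each stream to its own file
( echo "normal output"; echo "an error" >&2 ) > out.log 2> err.log
cat out.log   # normal output
cat err.log   # an error
```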

MySQL profiler

March 4th, 2009 1 comment

When your query is slow you can debug your query with EXPLAIN. But did you know that since MySQL 5.0 you can use a profiler in MySQL?
It’s easy to use and can help you to find the bottleneck in your query.

How it works

Open a MySQL console and turn on the profiler:

mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)

Then execute the query you want to profile; you can execute as many queries as you want, but by default only the most recent ones are kept (controlled by the profiling_history_size variable, default 15).

mysql> select count(1) from files;
+----------+
| count(1) |
+----------+
|    13631 |
+----------+
1 row in set (0.09 sec)

Now look up the query ID with the command:

mysql> show profiles;
+----------+------------+----------------------------+
| Query_ID | Duration   | Query                      |
+----------+------------+----------------------------+
|        1 | 0.09114000 | select count(1) from files |
+----------+------------+----------------------------+
1 row in set (0.00 sec)

To get the details of this query, use its query ID.

mysql> show profile for query 1;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| (initialization)     | 0.000003  |
| checking permissions | 0.000031  |
| Opening tables       | 0.000028  |
| System lock          | 0.000018  |
| Table lock           | 0.000009  |
| init                 | 0.00002   |
| optimizing           | 0.000008  |
| statistics           | 0.000023  |
| preparing            | 0.000012  |
| executing            | 0.000008  |
| Sending data         | 0.0909409 |
| end                  | 0.000013  |
| query end            | 0.000005  |
| freeing items        | 0.00001   |
| closing tables       | 0.000008  |
| logging slow query   | 0.000003  |
+----------------------+-----------+
16 rows in set (0.01 sec)

Now you can see the duration of every step MySQL takes. You can get a lot more detail, like CPU usage; just take a look at the manual.
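SHOW PROFILE also accepts type specifiers, which is standard MySQL syntax (the output columns vary per type); for example:

```
mysql> show profile cpu for query 1;
mysql> show profile all for query 1;
```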

Running processes as background jobs

March 2nd, 2009 1 comment

In a Linux terminal a command normally runs in the foreground, but if you close your terminal session the process will stop running. If you run a large database import you don't want to sit and wait for it to finish, so you want to run it in the background.

To do this you can simply append a ‘&’ to the command you want to execute, for example:

mysql -u root -p < import.sql &

But it’s also possible to put a process to the background when it’s already running. You can do this with the keyboard shortcut ctrl + z. Then you ‘stop’ the process. You should see a message like this:

[1]+  Stopped                 mysql

Now you have two choices: resume it in the foreground or in the background.
Just type bg <enter> and the process will continue running in the background. With fg <enter> it will resume in the foreground.

You can see what processes are running in the back- or foreground with the command jobs.
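A minimal demonstration of the above, using a short sleep as the stand-in for a long-running import:

```shell
# start a short job in the background, inspect the job table, then wait for it
sleep 1 &        # '&' puts the command in the background
jobs             # shows something like: [1]+  Running   sleep 1 &
wait $!          # $! is the pid of the last background job
echo "job finished"
```

Note that a job started with ‘&’ still belongs to the terminal; if it must survive a closed session, start it with nohup as well.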

Extend LVM with new physical disk

March 2nd, 2009 1 comment

LVM is a great tool for volume management. You can easily add a new (virtual) hard disk to an existing logical volume.
With fdisk -l you can see what devices are available. Choose the right one and execute:

fdisk /dev/sdb

Create a new partition of type Linux LVM (hex code 8e) that fills up all the free space.
Then we create a physical volume for LVM:

lvm> pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

After that we can extend the volume group with this new physical volume:

lvm> vgextend VolGroup00 /dev/sdb1
Volume group "VolGroup00" successfully extended

Then extend the existing logical volume with 100% of the free space from the new physical volume:

lvm> lvextend -l +100%FREE /dev/VolGroup00/os
Extending logical volume os to 27.84 GB
Logical volume os successfully resized

That’s all, the existing logical volume is now extended with the new harddisk.
If you have an ext3 filesystem on this logical volume you can extend it online using the command:

resize2fs /dev/VolGroup00/os
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/os is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/os to 7299072 (4k) blocks.
The filesystem on /dev/VolGroup00/os is now 7299072 blocks long.

Like I said before, this can be done on an online root filesystem without any problems; I've just done it again myself. But always make sure you have a backup!
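To check the result of each step, LVM ships standard reporting commands:

```
pvs     # list physical volumes and their size
vgs     # list volume groups and their free space
lvs     # list logical volumes
df -h   # confirm the filesystem itself grew after resize2fs
```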

Twitter integration

February 28th, 2009 No comments

I’ve started using Twitter (sorry only in dutch), the nice thing about this service is the free API. Thanks to this API there are many ways to integrate twitter in your websites and your life.


To show your tweets on your website you can use the widgets from Twitter. Or you can use Twitter counter.


For using Twitter on your mobile or desktop check out this site for a large list of Twitter apps. I have a Windows mobile based phone (Samsung Omnia) and I use Tiny Twitter. And on my Ubuntu desktop I use gTwitter.


I have a Flickr account and want to connect this to Twitter. This way I can mail my photos from my mobile to Flickr and automatically post them on Twitter. I was searching for a service like this and found Snaptweet. It did what I wanted, but it took 10-20 minutes before the photo appeared on Twitter. That's too long for me.

Next service I found was Twittergram, but they have a problem with the connection to Twitter so I couldn’t test it.

Another option is to use the Flick RSS feed and use the Twitterfeed service.

For now I’m using Twitpic, they also offer a email address to send your pictures.

pgFouine PostgreSQL log analyzer

February 11th, 2009 No comments

If you have a busy PostgreSQL database you may want to know which query takes up most of the time of the postmaster. To create a nice overview of slowest and most frequent queries you can use pgFouine.
This PHP script can parse a PostgreSQL log and create different reports.

The command I usually use:

./pgfouine.php -from "- 1 week" -memorylimit 512 -file <logfile.log> \
  -logtype stderr -title 'PostgreSQL analyze' \
  -report db-report.html=n-mostfrequenterrors -format html-with-graphs

Resize and auto orientate pictures

February 9th, 2009 No comments

If you have a Linux system with ImageMagick, you can easily resize and auto-orientate pictures with a bash script.

Just create a shell script with the following content:

#!/bin/bash
# usage: <script> <size> <directory>
size=$1
path=$2

list=`find $path*`

for file in $list; do
        echo $file
        convert -auto-orient -resize $size $file $file
done
exit 0

And give it execute rights.
And give it execute rights.
When you call the script with the arguments “1024 /home/user/pictures/”, every picture in /home/user/pictures will be auto-orientated and scaled to a maximum height or width of 1024 pixels. The auto-orientation is based on the EXIF info.
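A safe way to dry-run the loop is to print the convert commands instead of executing them; the /tmp/demo-pics directory and the file names below are made up for the demo:

```shell
# create a throwaway directory with two fake pictures
mkdir -p /tmp/demo-pics
touch /tmp/demo-pics/a.jpg /tmp/demo-pics/b.jpg

# print the convert commands the script would run, without touching the files
size=1024
for file in /tmp/demo-pics/*; do
    echo "convert -auto-orient -resize $size $file $file"
done
```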

Developing on Ubuntu

February 9th, 2009 No comments

At work I use an HP laptop with Ubuntu Intrepid. With Ubuntu and some nice applications you have a good platform for developing (web) applications.

These are the applications I use:

  • Firefox – Of course for surfing the internet, and with plugins like Firebug very useful for debugging JavaScript.
  • Thunderbird – My default mail client
  • Evolution – For access to my work calendar
  • KeePassX – For storing passwords
  • Charles – A web debugging proxy
  • Geany – A fast text editor with IDE features
  • Meld Diff viewer – A diff and merge tool
  • RapidSVN – Front-end for subversion
  • Zend Studio – My most used IDE
  • Avidemux – A simple video editor with support for many codecs
  • Truecrypt – For securing important files
  • Dropbox – Used for the keepassx database, making it available on all my computers
  • VirtualBox – Running a Windows XP environment for testing with IE and using Visio (haven't found a good alternative yet)

HP support pack on CentOS

February 7th, 2009 No comments

If you have CentOS running on an HP server you can install the HP support pack. Download the Red Hat version from the HP site.

First of all you need some rpm packages; this should be enough:

yum install rpm-build rpm-devel net-snmp glib kernel-devel \
  compat-libstdc++-296 make gcc

Then you have to edit the /etc/redhat-release file. First make a backup of the original file, then place the following line in it:

Red Hat Enterprise Linux ES release 5

The release number must match your CentOS version; in my case this was CentOS 5.2.
After this you can easily start the installation by typing: ./install<version>.sh -nui.