Hazard's stuff

Huawei OPS syslog example

— Posted by hazard @ 2023-06-29 10:54
Documentation on the Huawei site gives non-working examples and wrong parameter names for OPS syslog logging. Below are syslog calls that actually work:
import ops

def ops_condition(_ops):
    status, err_log = _ops.syslog("Hello world", ops.INFORMATIONAL, ops.SYSLOG)
    status, err_log = _ops.syslog("Example critical error", ops.CRITICAL, ops.SYSLOG)


Example Huawei OPS route monitoring script that changes VXLAN VTEP configuration if route goes away

— Posted by hazard @ 2023-06-17 23:42
To avoid using a peer-link between Huawei CloudEngine switches in a PtP VXLAN environment, and therefore save 4 x 100G ports, I made a Python script for Huawei OPS that changes the VTEP peer IP in case the route to the primary VTEP disappears (e.g. the primary switch fails). It was much more of an effort than it should have been, due to inadequate API documentation & examples which sometimes specify wrong parameter values. Non-working Python error reporting on Huawei VRP (at least on the OS release I used) didn't help either. Details below. (More)

static bearssl https client

— Posted by hazard @ 2021-12-28 21:26
From time to time I encounter a Linux system which doesn't have a proper SSL library, or doesn't have one at all (e.g. embedded). For such cases I made an extremely simplistic https client using the BearSSL library that can be compiled statically against an appropriate libc. It is a quick hack based on client_basic.c and may not handle unusual server responses properly. Example command line:
./https_client en.wikipedia.org 443 /wiki/Linux

The client writes response headers to STDERR and the body to STDOUT, and exits with status 0 if the HTTP response status is 200. To compile, untar into a folder containing the compiled BearSSL library, then cd https_client && ./make.sh
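
Since headers and body go to separate streams and the exit status signals success, scripting it is straightforward; a minimal usage sketch (file names are arbitrary):

#!/bin/sh
# save the body and headers separately; exit status 0 means we got HTTP 200
if ./https_client en.wikipedia.org 443 /wiki/Linux > body.html 2> headers.txt; then
    echo "OK: $(wc -c < body.html) bytes received"
else
    echo "request failed, see headers.txt" >&2
fi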

ISC dhcpd doesn't work for unicast renewals without ARP

— Posted by hazard @ 2018-07-14 13:29
While implementing a network isolation policy I ran into an interesting quirk of how ISC dhcpd works: for unicast renewals, it sends the DHCPACK through a regular Linux IP socket towards the client's IP address, not its MAC. As a result, an ARP entry for the client must exist, and therefore ARP towards the client host must work. For broadcast DHCP requests, dhcpd replies directly to the MAC address and ARP is not needed. I expected that a DHCP server wouldn't need anything apart from ports 67/68, and isolated everything else. The result was that some devices could get DHCP while others couldn't, which led to quite a fun troubleshooting session, especially since dhcpd was logging a successful DHCPACK response while tcpdump was showing that no response was actually sent over the wire.
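
A quick way to watch this on the wire is to capture DHCP and ARP together on the server (the interface name is an assumption; -e prints the MAC addresses):

tcpdump -eni eth0 'arp or (udp and (port 67 or port 68))'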

Script to automate addition of self-signed SSL certificate to Git

— Posted by hazard @ 2017-04-24 10:06
Out of the box, Git doesn't recognize the self-signed SSL certificates typically used for https repositories in internal networks and refuses to connect: "Peer's certificate issuer has been marked as not trusted by the user". A common workaround is to disable the certificate check altogether, which opens up the possibility of MITM. A safer solution is to add the SSL certificate of your internal repository to Git's config, so that it gets checked and recognized. This reduces your vulnerability window to the initial certificate download. I made a small shell script to automate the job: it downloads the SSL certificate and adds it to Git. Credit goes to ThorSummoner for the trick of fetching the cert using the OpenSSL client.
#!/bin/sh
if [ ! "$1" ] ; then
    echo "Pass repository domain name as parameter (e.g. $0 git.local)"
    exit 1
fi
mkdir -p ~/.gitcert
# fetch the server certificate (-servername enables SNI) and store it in PEM form
true | openssl s_client -servername "$1" -connect "$1":443 2>/dev/null |
       openssl x509 > ~/.gitcert/"$1".crt
git config --global http."https://$1".sslCAInfo ~/.gitcert/"$1".crt
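
Assuming you saved the script as gitcert.sh (the script and project names here are hypothetical), usage is simply:

./gitcert.sh git.local
git clone https://git.local/project.git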


Smokeping Mikrotik SSH plugin with VRF support

— Posted by hazard @ 2015-07-13 00:54
I have released a Smokeping plugin for Mikrotik RouterOS devices. It supports VRFs and connects via SSH. Installation instructions:
  • Download the plugin to your server.
  • Install by copying the file into the lib/Smokeping/probes directory under your smokeping installation (e.g. to /opt/smokeping/lib/Smokeping/probes).
  • You might also have to install the Net::OpenSSH Perl module, if it's not already installed (check by running "perl -e 'use Net::OpenSSH'").
  • Add the following section to your smokeping config:
    + OpenSSHMikrotikPing
    packetsize = [e.g. usual MTU is 1500]
    mikrotikuser = [user]
    mikrotikpass = [pass]
    # feel free to change params below as you wish
    forks = 5
    offset = 50%         
    timeout = 15
    step = 120
    
  • Individual targets are configured as follows:
    ++ sample-target
      probe = OpenSSHMikrotikPing
      menu = [menu name]
      title = [title]
      host = [destination IP to ping from Mikrotik device]
      pings = [number of ICMP pings to send, e.g. 5]
      source = [Mikrotik device to login into]
      vrf = [routing-instance name, optional]
      psource = [Mikrotik interface source IP to ping from, optional]
    
  • ssh to the Mikrotik device once from the command line as the user who runs smokeping (su -s /bin/sh [username]). On the first connect ssh will ask to add the new host to its known_hosts file; confirm it. Otherwise Smokeping will fail to log in, as the host key of your Mikrotik box is not in the known_hosts file.
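
For example, as root (the smokeping username and router address are assumptions):

su -s /bin/sh -c 'ssh admin@192.0.2.1' smokeping
# answer "yes" to the host key prompt, then log out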


bash binaries with ShellShock vulnerability patch for old Red Hat Linux systems

— Posted by hazard @ 2014-09-25 19:06
** UPDATED SEP 26 2014 FOR CVE-2014-7169 **

Some of us are unlucky enough to run older Linux systems (CentOS 4 and older) and need to fix the bash "ShellShock" environment code injection vulnerability.

To make the job easier, you can grab my CentOS 4.x i386 RPM/SRC RPM, as well as a Red Hat Linux 6.2 (circa 2000!) RPM/SRC RPM, which may work on other old systems as well, such as RH7. The SRPM should be buildable on all Red Hat systems.

MD5 sums:
30d76eb29c75ca9bf5dcc4d4903de299  bash-3.0-29centos4_vulnfix.i386.rpm
89f0c72480a2dbe28d61503973e98443  bash-3.0-29centos4_vulnfix.src.rpm
57bb220cc9ac5ef2c445a4dece61814c  bash-3.0-29rh62_vulnfix.i386.rpm
4ab6aa1a5958da0e5290f43134b08f2a  bash-3.0-29rh62_vulnfix.src.rpm

If you want to be 100% sure that the code wasn't tampered with, build your own binary from the src RPM and verify that all patches apart from 140/141 are Red Hat originals (140/141 were taken from Oracle's bash patches).
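
To check whether a given bash build is still vulnerable, you can use the widely published test strings for the two CVEs (a patched bash prints a warning or a literal "date" instead):

# CVE-2014-6271: prints "vulnerable" on unpatched bash
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
# CVE-2014-7169: on unpatched bash this creates a file named "echo" in the cwd
cd /tmp && env X='() { (a)=>\' bash -c "echo date"; cat echo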

fix for Linux Skype 4.3 crash on startup

— Posted by hazard @ 2014-08-11 10:47
If you upgraded your Linux Skype to 4.3 and face a crash immediately after startup, the fix that worked for me is as follows:

  • Make a backup of your home .Skype directory
  • Install sqlite package on your system if it isn't there already
  • Run: sqlite3 ~/.Skype/[YOURUSER]/main.db
  • DELETE FROM Messages WHERE type=68;
  • .quit
You will lose your file transfer history, but chat history will still be there. I found this workaround here. If it doesn't help, you may have to delete/rename your .Skype directory.


If you don't have audio in Adobe Flash in Fedora 20, pulseaudio is the reason

— Posted by hazard @ 2014-06-27 19:15
If YouTube on your Linux PC won't play more than one second of a video, flash on other websites has no sound, or you have any other issues with audio in other apps, the likely culprit is Pulseaudio. I could never figure out why the world needed pulseaudio, apart from the fact that we have to use everything Lennart Poettering creates.

Anyway, the solution to the pain is easy: just follow the excellent 30-second instructions on the Mondo Grigio blog.

Sample jQuery.sheet online spreadsheet backend with load/save functionality

— Posted by hazard @ 2014-05-25 12:30
I've been integrating online spreadsheet functionality into DokuWiki, and jQuery.sheet looked like the most suitable candidate. It comes with a number of examples, but there is no server-side backend that will save/load the sheets. As a proof of concept I created a simple jQuery.sheet database backend example in Perl. (More)

Smokeping Juniper JunOS plugin with routing-instance and logical-system support

— Posted by hazard @ 2013-12-01 19:07
I have hacked together a Smokeping plugin for Juniper JunOS devices that supports VRFs (routing-instance) and logical systems.
  • Download the plugin to your server.
  • Install by copying the file into the lib/Smokeping/probes directory under your smokeping installation (e.g. to /opt/smokeping/lib/Smokeping/probes).
  • You might also have to install the Net::OpenSSH Perl module, if it's not already installed (check by running "perl -e 'use Net::OpenSSH'").
  • Add the following section to your smokeping config:
    + OpenSSHJunOSPing
    packetsize = [in JunOS 1472 is the max for 1500 L3 MTU]
    junospass = [pass]
    junosuser = [user]
    # feel free to change params below as you wish
    forks = 5
    offset = 50%         
    timeout = 15
    step = 120
    
  • Individual targets are configured as follows:
    ++ sample-target
      probe = OpenSSHJunOSPing
      menu = [menu name]
      title = [title]
      host = [destination IP to ping from JunOS device]
      pings = [number of ICMP pings to send, e.g. 5]
      source = [JunOS device to login into]
      logicalsystem = [logical system name, optional]
      vrf = [routing-instance name, optional]
    
  • ssh to the JunOS device once from the command line as the user who runs smokeping (su -s /bin/sh [username]). On the first connect ssh will ask to add the new host to its known_hosts file; confirm it. Otherwise Smokeping will fail to log in, as the host key of your JunOS box is not in the known_hosts file.


Forcing Fedora's preupgrade to use servers in Europe instead of Asia

— Posted by hazard @ 2012-01-07 19:23
I've decided to upgrade my FC14 to FC16. Along the way, I decided to do it using a method I'd never used before: preupgrade. Supposedly it's one of the easiest and least time-consuming methods. Not in Cyprus ... (More)

nginx as protection against DDoS to Apache

— Posted by hazard @ 2011-08-28 18:33
A few days ago I was asked to help with a DDoS attack against a website. The DDoS itself was pretty generic: a small zombie network hammering particular URLs of the websites with GET requests. The websites were running on Apache, and even though the target page was static, the DDoS was bringing Apache to its knees. The system administrators had tried various Apache modules and configuration tricks to protect against the DDoS, but to no avail.

There was only one solution on my mind: install nginx. And that really helped. nginx is asynchronous by design and therefore handles load much, much better. Whilst Apache was failing at several hundred simultaneous connections, nginx easily handled the 10k connections caused by the DDoS, whilst using only 20% CPU.

The first website was completely moved to nginx, with PHP being served through PHP/FastCGI. For the second website, nginx was configured in proxy mode, so that it would forward all requests to Apache whilst enforcing limits against the DDoS: 1 unique page request per IP per second, as well as blocking certain user agents. Below is an example configuration I created, relevant for CentOS/RHEL. (More)
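
The rate-limiting core of such a proxy setup looks roughly like the sketch below (zone size, user-agent pattern and backend address are assumptions, not the actual config from the link):

events { worker_connections 1024; }
http {
    # one state slot per client IP, at most 1 request per second
    limit_req_zone $binary_remote_addr zone=ddos:10m rate=1r/s;
    server {
        listen 80;
        location / {
            limit_req zone=ddos;
            # block the zombies' user agents
            if ($http_user_agent ~* "badagent") { return 403; }
            proxy_pass http://127.0.0.1:8080;  # Apache behind nginx
        }
    }
}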

Fixing Greenplum 'unresolved in-doubt transaction' errors

— Posted by hazard @ 2010-06-29 05:00
We had an issue with a database server running Greenplum (commercial Postgresql for large-scale datawarehousing). Greenplum was starting, but attempts to do anything in the database were resulting in the following errors:

INFO: Crash recovery broadcast of the distributed transaction 'Commit Prepared' broadcast succeeded for gid = 1265880453-0032866370.
INFO: Crash recovery broadcast of the distributed transaction 'Abort Prepared' broadcast succeeded for gid = 1265880453-0032866371��C
psql: FATAL: DTM Log recovery failed. There are still unresolved in-doubt transactions on some of the segment databaes that were not able to be resolved for an unknown reason. (cdbtm.c:2829)
DETAIL: Here is a list of in-doubt transactions in the system: List of In-doubt transactions remaining across the segdbs: ("1265880453-0032866371��C" , )
HINT: Try restarting the Greenplum Database array. If the problem persists an Administrator will need to resolve these transactions manually.

Of course, manuals/forums/Google did not provide any useful ideas as to how 'to resolve these transactions manually'. Moreover, there was no backup handy (and the db was huge). I didn't care about two lost transactions, I just wanted to start the database. After an hour of attempts I eventually succeeded. The trick was to delete files from the pg_twophase/ subdirectories.
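
In shell terms the fix came down to something like this (the data directory layout is an assumption; stop the database and back the files up first):

# inspect the stale two-phase state files on each segment
find /data/*/gpseg*/pg_twophase -type f
# if losing those transactions is acceptable, remove the files and restart
rm -i /data/*/gpseg*/pg_twophase/*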

I'm blogging this in the hope that when somebody else faces this problem, they'll be able to find this post through Google and save their nerves. :-)

Kernel 2.6.28 for Fedora Core 8

— Posted by hazard @ 2009-02-28 14:31
In case someone wants to run a recent Linux kernel on an FC8 box, I have made an RPM for 2.6.28.7 and you can download it here. Should also install on CentOS 5/RHEL 5 if you use --force.

You think that SPAM is distributed? So did I.

— Posted by hazard @ 2008-11-14 14:31
The shutdown of a single ISP, McColo, has reduced world spam levels by 70%. Check the full story here. Amazing.

HP F4180 printer/scanner/copier

— Posted by hazard @ 2008-06-22 10:09
I recently purchased HP F4180 printer/scanner/copier for 50 EUR. It is amazing how inexpensive these things are nowadays.

As regards Linux compatibility, I must say that it is very good and everything worked on the first attempt. Simply download HPLIP and all configuration is done automatically (at least on FC8). Printing works through CUPS, scanning through xsane. Thumbs up to HP!

IBM-Lenovo X61

— Posted by hazard @ 2008-03-15 09:05
My colleague received an IBM-Lenovo X61 laptop, and we found that the built-in Intel 4965 wireless is very slow when connected to an 802.11b access point. The same problem persisted with both Fedora Core 8 and Ubuntu (iwlwifi driver).

The usual tweaking of ACPI and APIC parameters didn't help. Googling around showed a lot of people suffering from the same problem with Intel 4965 wireless cards. Eventually I resolved the problem by removing the iwlwifi driver files from /lib/modules and installing ndiswrapper (which allows using Windows network drivers) + the Intel 4965 driver for Windows XP.

The Vista Experience

— Posted by hazard @ 2008-03-10 07:02
My old desktop machine had died, so I decided to buy a new laptop to replace it. Even though most of the time it will be sitting in the same place, laptops nowadays are cheap and mobility is a nice option to have.

So, I got an HP Compaq 6710b. Along the way I purchased an upgrade to 2 GB RAM and a 320 GB hard drive. The laptop comes with Vista by default, which I wanted to keep, just in case I need to run some Windows stuff that wouldn't work in Linux. At first I start the laptop with its factory 160 GB HDD: HP's Vista installer loads from a special partition and in less than two hours I have a running Vista (enough time to install Fedora twice). Then I proceed to make recovery DVDs, so that I can install Vista on the 320 GB disk. Vista needs only two DVDs and another hour to do that; excellent. I replaced the hard drive and started the whole Vista installation process again from the DVD.

Of course, HP's Vista installer takes over the entire disk without asking, I'm sure only to make the experience more user-friendly (after all, these "Advanced" buttons are way too tricky). The thing is, I want to give only 50 GB to Vista. Anyway, Vista gets installed, and I'm logged in. I start the Disk Management tool - actually, I already got irritated at this point, because Microsoft thought it a good idea to break the old ways of using Windows - and voilà, it tells me that I can shrink my disk to 160 GB. For those of you who don't know, the actual space used by Vista files is less than 20 GB.

The tool also vaguely mentions that I can get rid of shadow copies and the paging file in order to increase available space. Of course it is useless to consult the built-in Help on how to actually do that, but thanks to Microsoft's competitor Google, this information was retrieved and the necessary actions were performed. Now I can shrink my drive by 3-4 GB more. Fantastic! Just what I dreamed of: to buy a 320 GB HDD and leave 155 GB for Vista. Googling around shows that what I got is normal for the Windows disk shrinking tool - it frees about 50% of the space.

"Screw it!", I think to myself, and proceed to install Fedora 8. As a precaution, I left first partition empty at 60 GB, so that I can try to install Windows again later (of course, I was also understanding that doing standalone Vista/XP install would be painful because it would not have the HP drivers). Anyway, in one hour FC8 is up and running - "nohz=off" was needed to make the Fedora installer work.

However, the built-in modem is NOT up and running, as it is one of the softmodems for which Agere has not released drivers. I find a few suggestions on the mailing lists that making a Frankenstein driver by copying .o files from one driver to another might work, but I only get a few OOPSes as a result and NOT a working modem.

Now, I need the modem, because I have to connect to remote console servers over the phone when troubleshooting network outages. So, I insert the Vista rescue DVD again, hoping that I might have missed an option to install into a specific partition. Nope, not there. And it also overwrote my MBR without asking. How nice.

I look back at my 160 GB drive. What if I shrink that one and then transplant Vista to my 320 GB HDD? In a few minutes my 160 GB is back in the laptop, I run the Windows shrinking tool, and voilà, it shrinks it down to a whopping 45 GB. A few minutes more, and the 160 GB drive is in a USB enclosure, the 320 GB one is back inside the laptop, and dd is happily copying the first partition from the 160 GB drive to the 320 GB one. One hour passes; dd has finished, not as quickly as I expected - only 7 MB/s. Anyway, let's try to boot Vista... drums roll (in my head)... Vista's loading bar starts to run around the screen... KABOOM, "winload.exe is missing or corrupted".

Back to Linux, mount the Vista partition. winload.exe is there and its MD5 is the same as the original on the 160 GB HDD. Also, I can't find boot.ini anywhere. Hmmm. Time for another visit to Microsoft's competitor Google. Aha - Microsoft is improving at friendly error messages: "winload.exe is missing or corrupted" actually means that the disk ID has changed. Just to make it easier for the average Joe to use Vista, Microsoft has started checking that the disk ID entered into the bootloader config matches the one on the actual drive. Otherwise it won't boot, even if everything else is in place.

Alright, let's see how we can fix that bootloader config. Another innovation! Finally Microsoft has managed to get rid of that prehistoric way of configuring the bootloader via a text boot.ini file! Now we have a shiny new registry-like binary database somewhere else. To edit it, use BCDEDIT.EXE. Cool. My problem is that I don't have a working Vista to run it on.

"If the mountain will not come to Mohammed, Mohammed must go to the mountain". Disk ID is written in the MBR. Armed with dd and mcedit in hex mode, I copy the Disk ID from the 160 GB HDD's MBR into 320 GB, then use fdisk to confirm that they match. Reboot, select "Other" in GRUB....

IT WORKS!!!

Looking back, I'm glad at how painless and inspiring my Vista experience was. Such experiences bring more users to Linux.

UPDATE: I later discovered the existence of the "ntfsresize" tool under Linux, which apparently does a much better job at NTFS resizing than Vista's built-in one.
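
For the record, the disk ID transplant itself is a one-liner, since the NT disk signature is the 4 bytes at MBR offset 440 (the device names are assumptions; triple-check them, dd has no undo):

dd if=/dev/sdb of=/dev/sda bs=1 count=4 skip=440 seek=440 conv=notrunc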

Linux tc multi-level massive hashing

— Posted by hazard @ 2008-02-10 11:33
It is little known that the Linux tc traffic-shaping framework supports multi-depth filter hashing, allowing you to reduce CPU load on installations with a lot of filters. Here is how to configure it.

Say we have an installation with several thousand hosts in the 10.1.C.D and 10.2.C.D ranges. First, we create a hash table for 10.1.0.0/16:

tc filter add dev eth3 parent 1:0 prio 1 handle 100: protocol ip u32 divisor 256

tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 800:: match ip dst 10.1.0.0/16 hashkey mask 0x0000ff00 at 16 link 100:


This instructs the kernel to create hash table 100 (hex!) with 256 buckets. The next line attaches a filter which makes all traffic with a destination IP in the 10.1.0.0/16 range be looked up in this hash table ("link 100:") using the third (C) octet of the IP address ("hashkey mask 0x0000ff00 at 16").

The next command does the same, but for 10.2.0.0/16:

tc filter add dev eth3 parent 1:0 prio 1 handle 101: protocol ip u32 divisor 256

tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 800:: match ip dst 10.2.0.0/16 hashkey mask 0x0000ff00 at 16 link 101:


Now, we create a hash table for every /24 subnet used inside these /16 ranges. Say, for 10.1.1.0/24:

tc filter add dev eth3 parent 1:0 prio 1 handle 201: protocol ip u32 divisor 256

tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 100:1: match ip dst 10.1.1.0/24 hashkey mask 0x000000ff at 16 link 201:


The first line creates hash table 201 with 256 buckets. The second line is more complex: "ht 100:1:" means that this filter is placed into hash table 100, bucket 1 (hex). So, considering the filter for hash table 100, this rule will be evaluated when the third (C) octet of the IP address is 1, e.g. 10.1.1.X, and will then do a further lookup in hash table 201 ("link 201:"). "hashkey mask 0x000000ff at 16" means that the lookup in table 201 will happen using the fourth (D) octet of the IP address, e.g. 10.1.1.1 goes into table 201 bucket 1, 10.1.1.2 into bucket 2, 10.1.1.3 into bucket 3 etc.

We go on with a similar configuration for 10.1.2.0/24, 10.2.1.0/24 and 10.2.2.0/24, assigning a unique hash table number to each subnet:

# 10.1.2.0/24
tc filter add dev eth3 parent 1:0 prio 1 handle 202: protocol ip u32 divisor 256

tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 100:2: match ip dst 10.1.2.0/24 hashkey mask 0x000000ff at 16 link 202:
# 10.2.1.0/24
tc filter add dev eth3 parent 1:0 prio 1 handle 203: protocol ip u32 divisor 256

tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 101:1: match ip dst 10.2.1.0/24 hashkey mask 0x000000ff at 16 link 203:
# 10.2.2.0/24
tc filter add dev eth3 parent 1:0 prio 1 handle 204: protocol ip u32 divisor 256

tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 101:2: match ip dst 10.2.2.0/24 hashkey mask 0x000000ff at 16 link 204:


Note that the bucket numbers for 10.1.1.0 and 10.2.1.0 are the same (bucket 1), because the third octet is the same; each rule just goes into that bucket of its own table - table 100 for the 10.1.0.0/16 range ("ht 100:1:") and table 101 for 10.2.0.0/16 ("ht 101:1:"). For the same reason 10.1.2.0 and 10.2.2.0 both land in bucket 2 ("ht 100:2:" and "ht 101:2:" respectively).

The last step is to populate the hash tables for the fourth (D) octet, e.g. for 10.1.1.D:

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:1: match ip dst 10.1.1.1/32 flowid 1:10

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:2: match ip dst 10.1.1.2/32 flowid 1:20

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:3: match ip dst 10.1.1.3/32 flowid 1:10

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:a: match ip dst 10.1.1.10/32 flowid 1:10



For example, the fourth octet of 10.1.1.1 is 1, so the kernel will look for a rule in hash table 201, bucket 1. That's why the first line contains "ht 201:1:". Similarly, for 10.1.1.2 we use "ht 201:2:". Remember, all ht values are in hex, which is why 10.1.1.10 has "ht 201:a:". "flowid 1:10" indicates which class this filter attaches to - most likely you are using HTB for shaping and this would be one of its classes (say, gold or bronze).
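
Since the bucket number is just the host octet in hex, these per-host rules are easy to generate in a loop; a sketch for 10.1.1.0/24 (the flowid is an assumption):

for D in $(seq 1 254); do
    H=$(printf '%x' "$D")   # bucket numbers are hex
    tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 \
       ht 201:$H: match ip dst 10.1.1.$D/32 flowid 1:10
done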

Apply the same approach to hosts in other subnets:

# Hosts in 10.1.2.0/24
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 202:1: match ip dst 10.1.2.1/32 flowid 1:20

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 202:2: match ip dst 10.1.2.2/32 flowid 1:10

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 202:3: match ip dst 10.1.2.3/32 flowid 1:20
# Hosts in 10.2.1.0/24
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 203:1: match ip dst 10.2.1.1/32 flowid 1:20

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 203:2: match ip dst 10.2.1.2/32 flowid 1:10

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 203:3: match ip dst 10.2.1.3/32 flowid 1:20
# Hosts in 10.2.2.0/24
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 204:1: match ip dst 10.2.2.1/32 flowid 1:10

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 204:2: match ip dst 10.2.2.2/32 flowid 1:20

tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 204:3: match ip dst 10.2.2.3/32 flowid 1:10


Done!


change of paper size in GNOME Evolution and other GNOME applications

— Posted by hazard @ 2008-02-02 12:51
GNOME applications, such as Evolution and Evince, use the locale to figure out the paper size you want. So, to change the paper size, you need to export an LC_PAPER environment variable containing a locale which uses the paper size you are looking for. E.g., for A4 paper:
export LC_PAPER="POSIX"
For Letter paper size:
export LC_PAPER="en_US"
My system locale is en_US, and it seems that GNOME is hardcoded to use the Letter paper size for this locale, which is wrong for the place I live in - the standard paper size here is A4.

It took me quite a few walks to the office printer to figure out this one...


parprouted 0.7

— Posted by hazard @ 2008-01-27 18:11
I have just released parprouted-0.7. The major new improvement is support for hosts moving across interfaces; credit for most of that work goes to Norbert Unterberg and Christian Knecht. Another improvement is support for the ARM arch, submitted by Zhouzhi.

Replacing failed hard drive in Linux Software RAID

— Posted by hazard @ 2007-12-28 16:33
Initially I thought it would be a quick midnight maintenance, not taking more than 10 minutes... Oh boy, I was wrong.

Googling around showed an absence of any real-life scenarios for Linux software RAID disk replacement. All articles were of the "and now we simulate a disk failure..." category, and on top of that, most of them were outdated. No article seemed to cover the scenario where a disk has REALLY failed and the system was rebooted after the failure.

Even more surprisingly, it seems that CentOS 5/Red Hat Enterprise 5 rescue disks are NOT designed to handle software arrays with any kind of problem. They just refuse to detect problematic arrays, and mdadm will not show anything.

To cut a long story short, here is a REAL-LIFE procedure on how to replace a failed disk in Linux software RAID array:
  • Insert the new hard drive (probably your server needs to be turned off when doing that).
  • Boot from a rescue CD.
  • Create a partition table on the new drive so that all partitions are in the same order and of the same sizes as the partitions on the working drive (see the sketch after this list).
  • Set the RAID partition type to Linux (83), not Linux raid auto (fd). THIS IS VERY IMPORTANT AND IS THE OPPOSITE OF INSTRUCTIONS YOU CAN FIND ELSEWHERE. Otherwise your Linux system won't boot.
  • Now boot Linux from the working hard drive (I hope you had a bootloader installed on it; otherwise install one).
  • Add the new hard drive into the array:
    mdadm [MD-device] -a [new-HDD-device]
    For example,
    mdadm /dev/md0 -a /dev/sdb1
  • Check that the hard disk was successfully added using
    mdadm -Q --detail [MD-device]
    Among other things it should say something like "reconstructing 0%".
  • Now run fdisk and change the RAID partition type to Linux raid auto (fd).
  • If everything went fine until here, consider yourself lucky. :)
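
The partition-table copy mentioned above can be done with sfdisk (a sketch; sda is assumed to be the working disk and sdb the new one - double-check before running):

# dump the layout of the working disk and replay it onto the new drive
sfdisk -d /dev/sda | sfdisk /dev/sdb
# then change the RAID partitions' type to Linux (83) with fdisk, as per the steps above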


parprouted 0.65

— Posted by hazard @ 2007-08-26 07:44
I have just released parprouted 0.65. It fixes incorrect handling of the case when there are two ARP entries for the same IP, an incomplete and a correct one. This can happen when the machine running parprouted is rebooted. Credit for the fix goes to Dennis Borgmann and Matthias Huning.

Copying phone book from one smartphone to another using Linux, multisync and SyncML

— Posted by hazard @ 2007-08-14 05:13
I've got a Nokia E90 for testing, and was therefore faced with the task of copying the phone book from my old Motorola A780 to it. It took me a while to figure out, and below you can find the quick(?) & dirty way to do it.
  • You will need to have some sort of IP connectivity between your phones and PC. I successfully used GPRS and WiFi.
  • Compile and install wbxml2-0.9.0 with the Nokia bug workaround. You can download it here. I had to copy the wbxml* include files from /usr/include to /usr/include/wbxml2 to make multisync's configure script recognize that wbxml2 is installed.
  • Download a CVS snapshot of multisync, and patch the SyncML plugin to send an "ADD" command instead of "REPLACE". This is not a bugfix but a nasty hack: multisync's backup plugin performs restores using the REPLACE command, and the Nokia E90 discards all contacts if it doesn't already have them. You can find the patched multisync source that I used here.
  • Compile and install multisync. Note that you have to run "make install" also in the SyncML & Backup plugin directories, not only in the multisync source directory.
  • Start up multisync using the following command: "MULTISYNC_DEBUG=1 multisync" (that will enable debug output to the console).
  • Configure a pair with SyncML and Backup plugins. I was interested in nothing else but the phone book, so I ticked only "Addressbook" in the "Data types to synchronize".
  • Configure SyncML plugin (Options... button) to use http protocol, enable "disable string tables" option.
  • Enter a directory in the Backup plugin options.
  • Now configure SyncML on your phones (usually somewhere from within a Sync application). Make sure that phone book database name is specified as "addressbook" (case sensitive!) and configure the phones to use the same URL and port as in SyncML plugin configuration.
  • Make sure that your phones can reach your PC. Alter iptables configuration etc.
  • Proceed to synchronize your OLD phone to multisync. Watch the debug output in the console from which you started multisync.
  • Once synchronization is done, go into Backup plugin options, and click "Restore All".
  • Proceed to synchronize your NEW phone to multisync. You may get an alert saying that there is a loss of integrity between the phone and multisync's database; confirm that you want to proceed with the restore anyway.
  • The end. If it doesn't work, check multisync's debug output (which unfortunately is not very useful), and double-check that you have entered the settings properly in your phone. tcpdump may also be useful to see if there is communication between the phone and your PC.


Matching range of values using tc u32

— Posted by hazard @ 2007-03-07 16:00
Very few people realize that Linux's tc u32 filter allows you to match a range of values. Basically, the logic is very similar to IP address/netmask matching.

For example, this u32 rule will match all source ports in the port range 0-1023: match ip sport 0x0 0xfc00

All you need is some simple binary arithmetic. Remember that u32 ANDs the packet value with the mask and then compares the result with the target value. 1023 is 0x3FF in hex; after inversion (XOR 0xffff) we get 0xfc00. 0xfc00 is 1111110000000000 in binary, so using it as a mask keeps only the upper 6 bits, i.e. it discards exactly the bits that a port below 1024 can have set. 0x0 (zero) is the target value that u32 compares against. Any port greater than or equal to 1024 gives a non-zero result after the "[port] AND 0xfc00" operation, so the filter matches only ports below 1024.
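
You can sanity-check the mask arithmetic in any shell (the ports here are just examples):

printf '%x\n' $(( 80 & 0xfc00 ))     # 0    -> port 80 matches
printf '%x\n' $(( 8080 & 0xfc00 ))   # 1c00 -> port 8080 does not match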

ciscobackup 1.0

— Posted by hazard @ 2007-02-17 16:48
I have released ciscobackup 1.0, a handy utility script for secure backup of the configuration of Cisco IOS-based routers and other devices via SSH. The script depends on the Net::SSH::Perl module; make sure to look at the README file before installing.

A Simple Solution to fight SPAM

— Posted by hazard @ 2007-01-21 09:45
According to the following article, the amount of SPAM has almost doubled during the past year. Most SPAM is sent from PCs infected by trojans.

Actually, it is relatively easy for an Internet Service Provider to prevent this type of SPAM from being sent by its customers. Over here at PrimeTel (a Cyprus ISP) we do not allow home users to connect to third-party Internet SMTP servers by default, and we developed an add-on to our mailserver so that any IP address which has sent more than a threshold number of emails during the past hour is blocked automatically.

If mail server vendors/developers start including this feature in standard configurations, the problem will be gone.


VirtualBox networking and parprouted

— Posted by hazard @ 2007-01-20 19:14
VirtualBox is a nice, free and semi-open-source virtual machine, allowing you to run WinXP on Linux pretty much the same way as with VMWare.

The only documented methods of doing networking from the VM are NAT and a host-based bridge using Linux layer-2 bridging (brctl). NAT was no good for me, as I need incoming connections to work. A layer-2 bridge is no good if you don't have a multiple-MAC capable networking card (e.g. a wireless connection), or if you want to filter IP packets coming out of the VM on the host using iptables AND without allocating an additional subnet.

I had the latter case - I have a /29 on my Internet connection with six usable IPs, and splitting it into two /30 subnets would mean ending up with only two usable IPs. Surely enough, parprouted comes to the rescue. :) Below are the steps to set it up.

  • Get the tunctl utility - it comes together with the UML utilities. In my case I just downloaded one of the binary RPMs on the net and copied tunctl to /usr/local/sbin.
  • Download and install parprouted.
  • Add the following to your /etc/rc.d/rc.local:
    /usr/local/sbin/tunctl -t tap0
    /sbin/ip link set tap0 up
    /sbin/ip addr add 172.16.16.16/32 dev tap0
    /usr/local/sbin/parprouted eth0 tap0
    
  • Make sure IP forwarding is enabled on the host (/etc/sysctl.conf in Fedora).
  • Restart your PC (or execute commands manually).
  • Go to VM settings in VirtualBox, in "Networking" select Host-based. In the device name field, enter "tap0".
  • Configure your WinXP (or whatever OS you run in the VM) to use one of the IP addresses from the subnet assigned to your eth0 interface.
  • That's it - test your connectivity. Make sure that iptables configuration on your host is not filtering out packets sent from the VM.


WISP-Dist 1.3.1p35 declared stable

— Posted by hazard @ 2007-01-14 11:44
WISP-Dist 1.3.1p35, which had been beta since 2005, has been declared stable. Of course it is quite outdated right now, as it doesn't have reliable Atheros support; I wish I had time to update it. :-(

parprouted 0.64

— Posted by hazard @ 2007-01-14 11:39
I made a new release of parprouted which fixes a bug where it sent ARP requests for incomplete records. Thanks to Ben Pfountz for the patch.

OpenVZ hints

— Posted by hazard @ 2007-01-14 05:29
Here are a few useful hints I learned while I was setting up VPS for hazardous-area.org:

  • A default mysql+apache installation WON'T fit into 128 MB of memory on an OpenVZ VPS, and you'll get memory allocation errors. The core of the issue is that OpenVZ apparently counts all memory allocated by processes, even if it hasn't been used. To overcome the problem you need to minimize allocated memory:
    • Use minimal mysqld config (copy my-small.cnf from /usr/share/doc/mysql-server*).
    • Add "skip-innodb" to my.cnf - it will reduce MySQL server memory footprint by about 100 MB.
    • Use lighttpd instead of Apache.
  • openvzmon is a nifty tool which gives a more realistic report on CPU and memory usage of your OpenVZ VPS.
  • Also you can get useful stats from /proc/user_beancounters; see the sketch after this list.
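
For example, resources that have actually hit their limits show a non-zero fail counter in the last column (run inside the VPS):

awk 'NR > 2 && $NF > 0' /proc/user_beancounters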


installing Fedora Core 6 Linux on HP DX7300

— Posted by hazard @ 2007-01-05 23:35
If you try to install FC6 on an HP DX7300 workstation, the kernel will hang during boot. After some fiddling I figured out that you need to pass the following on the boot prompt to resolve the problem: "linux pci=nommconf".

Silver Needle in the Skype

— Posted by hazard @ 2006-04-30 00:26
I found a very interesting paper on Skype presented at the Black Hat Europe 2006 conference. Scary, too. I guess secure communication is one of the clear cases where open source software has an inherent advantage for the user...

ivtv & AMD 760MP chipset

— Posted by hazard @ 2006-03-06 05:36
I spent a lot of time wondering why several WinTV PVR 150 cards didn't work together properly under the Linux ivtv driver - video was getting stuck after 15-30 minutes. After I plugged them into an old COMPAQ DL360 server with an Intel motherboard, the problem disappeared. The conclusion: the AMD 760MP chipset and multiple WinTV PVR 150 cards do not live well together.

Comparison of C, C++, Java, Perl, Python, Rexx, and Tcl

— Posted by hazard @ 2005-11-28 00:24
I encountered an interesting paper: "An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl for a search/string processing program". Basically, the same task was given to a number of programmers, and then various parameters, such as time to analyze the task, time to write the code, reliability, performance, memory consumption etc., were compared. The results are in line with what I would expect for this specific task. Perl got the best score for code development time, while C/C++ had the best performance and memory footprint. Java's performance was lower than I would expect, but that can be explained by the report being circa 2000; nowadays JVMs should have much better performance.

Sony's DRM rootkit

— Posted by hazard @ 2005-11-16 01:20
Just in case you didn't know... Sony is so desperate to stop their customers from ripping audio CDs that they included a rootkit as part of the bundled DRM software. You see, as soon as Sony's customer installs the player software from his legitimately purchased CD, he not only gets a player on his system, but also a rootkit hidden deep inside Windows! Fantastic.

Everything was fine and dandy until Sysinternals discovered and blogged about it.

Following a public outcry, Sony released an "uninstaller" for the rootkit. However, as Sysinternals found, what it actually does is install an updated copy of the DRM... Moreover, you have to fill in a couple of forms to get the uninstaller...

Already a few trojans have appeared that utilize the cloaking capabilities provided by the rootkit. Obviously the writers of these trojans didn't read Sony's press release, from which they would have learned that Sony's DRM software does not present any security risk to the customer's PC.

So, that's it? Nah. The player software is nice enough to access Sony's website each time you run it. Sure, having some extra information about your customers wouldn't hurt.

Now, imagine that rootkit was developed/distributed not by Sony, but by some teenage guy. Would he still be around?


Wise words

— Posted by hazard @ 2005-10-07 10:35
A "spec" is close to useless. I have _never_ seen a spec that was both big enough to be useful _and_ accurate.

And I have seen _lots_ of total crap work that was based on specs. It's _the_ single worst way to write software, because it by definition means that the software was written to match theory, not reality.


-- Linus Torvalds
(Full post)

Tiny patch to make dv2dv accept transcode DV files

— Posted by hazard @ 2005-05-03 04:02
When fed an .AVI file made by transcode, dv2dv exits with a confusing "cannot open file" error. In fact, it doesn't recognize the AVI file as a DV one. I made a small patch to fix it.

Google rules, as usual

— Posted by hazard @ 2005-04-29 15:03
One problem I encountered when using Opera on my Motorola A780 is that when I go to Google, it responds in WML. Considering that Opera is fully HTML compliant, this is very inconvenient, as you can't directly go to websites or use Google Images. So about a week ago I found Google's problem reporting form and described the issue there. Yesterday I got a reply: they are working on a fix, and as a workaround I can use http://www.google.com/webhp?hl=en, which forces Google to answer in HTML.

mod_dav_svn and mod_auth_pam combo unstable

— Posted by hazard @ 2005-04-20 22:26
I was investigating strange "Connection truncated" errors when Subversion clients were connecting through WEBDAV. It turned out that Apache's mod_dav_svn and mod_auth_pam do not interoperate very well: the combination results in httpd crashing with SIGALRM and "*** glibc detected *** free(): invalid pointer:" errors in the error_log.

In my case the solution was to switch to mod_auth_ldap and the crashes were gone.

Caching of DocBook DTDs

— Posted by hazard @ 2005-04-20 22:21
To my surprise I have discovered that whenever I create a PDF from DocBook using docbook2pdf, it fetches the DTDs from the Net. This makes PDF generation 3-4 times slower. Some Googling turned up the buildDocBookCatalog script, which can be downloaded from xmlsoft.org. It creates a catalog file which maps the URLs to locally cached DTD files.
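
Once the catalog is generated, libxml2-based tools pick it up through the XML_CATALOG_FILES environment variable; a sketch (the catalog path and document name are assumptions, use wherever the script put yours):

export XML_CATALOG_FILES="$HOME/xmlcatalog /etc/xml/catalog"
docbook2pdf mydoc.xml   # DTDs now resolve from the local cache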