I have hacked together a Smokeping plugin for Juniper JunOS devices that supports VRFs (routing-instance) and logical systems.
- Download the plugin to your server.
- Install by copying the file into lib/Smokeping/probes directory under your smokeping installation (e.g. to /opt/smokeping/lib/Smokeping/probes).
- You might also have to install Net::OpenSSH Perl module, if it's not already installed (check by running "perl -e 'use Net::OpenSSH'").
- Add the following section to your smokeping config:
packetsize = [in JunOS 1472 is the max for 1500 L3 MTU]
junospass = [pass]
junosuser = [user]
# feel free to change params below as you wish
forks = 5
offset = 50%
timeout = 15
step = 120
- Individual targets are configured as follows:
probe = OpenSSHJunOSPing
menu = [menu name]
title = [title]
host = [destination IP to ping from JunOS device]
pings = [number of ICMP pings to send, e.g. 5]
source = [JunOS device to login into]
logicalsystem = [logical system name, optional]
vrf = [routing-instance name, optional]
- ssh to the JunOS device once from the command line from the account of the user who is running smokeping (su -s /bin/sh [username]). On the first connect ssh will ask to add the new host to its known_hosts file; confirm it. Otherwise Smokeping will fail to log in, as the ssh key of your JunOS box is not in the known_hosts file.
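Putting the pieces above together, a complete config fragment might look like this (the hostnames, credentials, and values are placeholders for illustration, not a tested configuration):

```text
*** Probes ***

+ OpenSSHJunOSPing

packetsize = 100
junosuser = smokeping
junospass = secret
forks = 5
offset = 50%
timeout = 15
step = 120

*** Targets ***

probe = OpenSSHJunOSPing

+ branchlink
menu = Branch link
title = Branch office link via router1
host = 192.0.2.1
pings = 5
source = router1.example.com
vrf = CUSTOMER-A
```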
I've decided to upgrade my FC14 to FC16. Along the way, I decided to do it using a method that I've never used before - preupgrade. Supposedly it's one of the easiest and least time-consuming methods. Not in Cyprus ...
A few days ago I was asked to help with a DDoS attack against some websites. The DDoS itself was pretty generic: a small zombie network hammering particular URLs on the websites with GET requests. The websites were running on Apache, and even though the target page was static, the DDoS was bringing Apache to its knees. The system administrators tried various Apache modules and configuration tricks to protect against the DDoS, but to no avail.
There was only one solution on my mind - install nginx. And that really helped: nginx is asynchronous by design and therefore handles load much, much better. While Apache was failing with several hundred simultaneous connections, nginx easily scaled to the 10k connections caused by the DDoS while using only 20% CPU.
The first website was completely moved to nginx, with PHP being served through PHP/FastCGI. For the second website, nginx was configured in proxy mode, so that it would forward all requests to Apache while enforcing limits against the DDoS - 1 unique page request per IP per second, as well as blocking certain user agents. Below is an example configuration I created, relevant for CentOS/RHEL.
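The original configuration is not reproduced here, but the limits described can be sketched with nginx's limit_req module along these lines (the zone name, rate, user-agent pattern, and backend address are all illustrative assumptions):

```nginx
http {
    # at most 1 request/second per client IP, state tracked in a 10 MB zone
    limit_req_zone $binary_remote_addr zone=antiddos:10m rate=1r/s;

    server {
        listen 80;

        location / {
            # enforce the per-IP rate limit, with a small burst allowance
            limit_req zone=antiddos burst=5;

            # drop obviously bogus user agents (pattern is an example)
            if ($http_user_agent ~* (wget|libwww)) {
                return 403;
            }

            # forward everything else to Apache listening locally
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```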
We had an issue with a database server running Greenplum (a commercial PostgreSQL derivative for large-scale data warehousing). Greenplum was starting, but attempts to do anything in the database resulted in the following errors:
INFO: Crash recovery broadcast of the distributed transaction 'Commit Prepared' broadcast succeeded for gid = 1265880453-0032866370.
INFO: Crash recovery broadcast of the distributed transaction 'Abort Prepared' broadcast succeeded for gid = 1265880453-0032866371��C
psql: FATAL: DTM Log recovery failed. There are still unresolved in-doubt transactions on some of the segment databases that were not able to be resolved for an unknown reason. (cdbtm.c:2829)
DETAIL: Here is a list of in-doubt transactions in the system: List of In-doubt transactions remaining across the segdbs: ("1265880453-0032866371��C" , )
HINT: Try restarting the Greenplum Database array. If the problem persists an Administrator will need to resolve these transactions manually.
Of course, manuals/forums/Google did not provide any useful ideas on how 'to resolve these transactions manually'. Moreover, there was no backup handy (and the db was huge). I didn't care about two lost transactions, I just wanted to start the database. After an hour of attempts, I eventually succeeded. The trick was to delete files from the pg_twophase/ subdirectories.
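In shell terms, the cleanup looked roughly like this. This is a sketch, not a drop-in script: GPDATA is a hypothetical base directory holding the master/segment data directories; stop the database first, and copy the files you are about to delete somewhere safe.

```shell
# GPDATA is an assumed layout; point it at your real data directories.
GPDATA="${GPDATA:-$(mktemp -d)}"           # demo fallback: a scratch directory
mkdir -p "$GPDATA/seg1/pg_twophase"        # these two lines just build a demo
touch "$GPDATA/seg1/pg_twophase/00001234"  # layout; drop them for real use

# Remove stale prepared-transaction (2PC) state files from every segment:
for d in "$GPDATA"/*/pg_twophase; do
  [ -d "$d" ] || continue
  echo "clearing $d"
  rm -f "$d"/*
done
```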
I'm blogging this in the hope that when somebody else faces this problem, they will be able to find this post through Google, saving their nerves. :-)
In case someone wants to run a recent Linux kernel on an FC8 box, I have made an RPM for 220.127.116.11 and you can download it here. It should also install on CentOS 5/RHEL 5 if you use --force.
The shutdown of a single ISP, McColo, has reduced world spam levels by 70%. Check the full story here.
I recently purchased an HP F4180 printer/scanner/copier for 50 EUR. It is amazing how inexpensive these things are nowadays.
As regards Linux compatibility, I must say that it is very good and everything worked on the first attempt. Simply download HPLIP and all configuration is done automatically (at least on FC8). Printing works using CUPS, scanning using xsane. Thumbs up to HP!
My colleague received an IBM-Lenovo X61 laptop, and we found that the built-in Intel 4965 wireless is very slow when connected to an 802.11b access point. The same problem persisted with both Fedora Core 8 and Ubuntu (iwlwifi driver).
The usual tweaking of ACPI and APIC parameters didn't help. Googling around showed a lot of people suffering from the same problem with Intel 4965 wireless cards. Eventually I resolved the problem by removing the iwlwifi driver files from /lib/modules and installing ndiswrapper (which allows using Windows network drivers) plus the Intel 4965 driver for Windows XP.
My old desktop machine had died, so I decided to buy a new laptop to replace it. Even though most of the time it will be sitting in the same place, laptops nowadays are cheap and mobility is a nice option to have.
So, I got an HP Compaq 6710b. Along the way I purchased an upgrade to 2 GB RAM and a 320 GB hard drive. The laptop by default comes with Vista, which I wanted to keep, just in case I need to run some Windows stuff which wouldn't work in Linux. At first I start the laptop with its factory 160 GB HDD: HP's Vista installer loads from a special partition, and in less than two hours I have a running Vista (enough time to install Fedora twice). Then I proceed to make recovery DVDs, so that I can install Vista on the 320 GB disk. Vista needs only two DVDs and another hour to do that; excellent. I replaced the hard drive and started the whole Vista installation process again from the DVDs.
Of course, HP's Vista installer takes over the entire disk space without asking - I'm sure only to make the experience more user-friendly (after all, those "Advanced" buttons are way too tricky). The thing is, I want to give only 50 GB to Vista. Anyway, Vista gets installed, and I'm logged in. I start the Disk Management tool - actually, I already got irritated at this point, because Microsoft thought it was a good idea to break the old ways of using Windows - and voilà, it tells me that I can shrink my disk to 160 GB. For those of you who don't know, the actual space used by Vista's files is less than 20 GB.
The tool also vaguely mentions that I can get rid of shadow copies and the paging file in order to increase available space. Of course it is useless to use the built-in Help to find out how to actually do that, but thanks to Microsoft's competitor Google, this information was retrieved and the necessary actions were performed. Now I can shrink my drive by another 3-4 GB. Fantastic! Just what I dreamed of: buying a 320 GB HDD and leaving 155 GB for Vista. Googling around shows that what I got is normal for the Windows disk shrinking tool - it frees about 50% of the space.
"Screw it!", I think to myself, and proceed to install Fedora 8. As a precaution, I left first partition empty at 60 GB, so that I can try to install Windows again later (of course, I was also understanding that doing standalone Vista/XP install would be painful because it would not have the HP drivers). Anyway, in one hour FC8 is up and running - "nohz=off" was needed to make the Fedora installer work.
However, the built-in modem is NOT up and running, as it is one of the softmodems for which Agere has not released drivers. I find a few suggestions on the mailing lists that making a Frankenstein driver by copying .o files from one driver to another might work, but I only get a few OOPSes as a result and NOT a working modem.
Now, I need the modem, because I have to connect to remote console servers over the phone for troubleshooting of network outages. So, I insert Vista rescue DVD again, hoping that I might have missed an option to install into a specific partition. Nope, not there. And it also overwrote my MBR without asking. How nice.
I look back at my 160 GB drive. What if I shrink that one and then transplant Vista to my 320 GB HDD? In a few minutes my 160 GB drive is back in the laptop, I run the Windows shrinking tool, and voilà, it shrinks it down to a whopping 45 GB. A few minutes more, and the 160 GB drive is in a USB enclosure, the 320 GB one is back inside the laptop, and dd is happily copying the first partition from the 160 GB drive to the 320 GB one. One hour passes; dd has finished, not as quickly as I expected - only 7 MB/s. Anyway, let's try to boot Vista... drums roll (in my head)... Vista's loading bar starts to run around the screen... KABOOM, "winload.exe is missing or corrupted".
Back to Linux; mount the Vista partition. winload.exe is there, and its MD5 is the same as the original one on the 160 GB HDD. Also, I can't find boot.ini anywhere. Hmmm. Time for another visit to Microsoft's competitor Google. Aha - Microsoft is improving at friendly error messages: "winload.exe is missing or corrupted" actually means that the disk ID has changed. Just to make it easier for the average Joe to use Vista, Microsoft has started checking that the disk ID entered into the bootloader config matches the one on the actual drive. Otherwise it won't boot, even if everything else is in place.
Alright, let's see how we can fix that bootloader config. Another innovation! Finally Microsoft has managed to get rid of that prehistoric way of configuring the bootloader using a text boot.ini file! Now we have a shiny new registry-like binary database somewhere else. To edit it, use BCDEDIT.EXE. Cool. My problem is that I don't have a working Vista to run it on.
"If the mountain will not come to Mohammed, Mohammed must go to the mountain". Disk ID is written in the MBR. Armed with dd and mcedit in hex mode, I copy the Disk ID from the 160 GB HDD's MBR into 320 GB, then use fdisk to confirm that they match. Reboot, select "Other" in GRUB....
Looking back, I'm glad at how painless and inspiring my Vista experience was. Such experiences bring more users to Linux.
UPDATE: I later discovered the existence of the "ntfsresize" tool under Linux, which apparently does a much better job at NTFS resizing than Vista's built-in one.
It is little known that the Linux tc traffic-shaping framework supports multi-depth filter hashing, allowing you to reduce CPU load for installations with a lot of filters. Here is how to configure it.
Say we have an installation with several thousand hosts in the 10.1.C.D and 10.2.C.D ranges. First, we create a hash table for 10.1.0.0/16:
tc filter add dev eth3 parent 1:0 prio 1 handle 100: protocol ip u32 divisor 256
tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 800:: match ip dst 10.1.0.0/16 hashkey mask 0x0000ff00 at 16 link 100:
This instructs the kernel to create hash table 100 (hex!) with 256 buckets. The next line assigns a filter which makes all traffic with a destination IP in the 10.1.0.0/16 range be looked up in this hash table ("link 100:") using the third (C) octet of the IP address ("hashkey mask 0x0000ff00 at 16").
The next two commands do the same, but for 10.2.0.0/16:
tc filter add dev eth3 parent 1:0 prio 1 handle 101: protocol ip u32 divisor 256
tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 800:: match ip dst 10.2.0.0/16 hashkey mask 0x0000ff00 at 16 link 101:
Now, we create a hash table for every /24 subnet used inside these /16 ranges. Say, for 10.1.1.0/24:
tc filter add dev eth3 parent 1:0 prio 1 handle 201: protocol ip u32 divisor 256
tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 100:1: match ip dst 10.1.1.0/24 hashkey mask 0x000000ff at 16 link 201:
The first line creates hash table 201 with 256 buckets. The second line is more complex: "ht 100:1:" means that this filter is to be placed into hash table 100, bucket 1 (hex). So, considering the filter for hash table 100, this rule will be evaluated when the third (C) octet of the IP address is 1, e.g. 10.1.1.X, and will then do a further lookup in hash table 201 ("link 201:"). "hashkey mask 0x000000ff at 16" means that the lookup in table 201 will happen using the fourth (D) octet of the IP address, e.g. 10.1.1.1 goes into table 201 bucket 1, 10.1.1.2 into bucket 2, 10.1.1.3 into bucket 3, etc.
We go on with similar configuration for 10.1.2.0/24, 10.2.1.0/24, 10.2.2.0/24, assigning a unique hash table number for each subnet:
tc filter add dev eth3 parent 1:0 prio 1 handle 202: protocol ip u32 divisor 256
tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 100:2: match ip dst 10.1.2.0/24 hashkey mask 0x000000ff at 16 link 202:
tc filter add dev eth3 parent 1:0 prio 1 handle 203: protocol ip u32 divisor 256
tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 101:1: match ip dst 10.2.1.0/24 hashkey mask 0x000000ff at 16 link 203:
tc filter add dev eth3 parent 1:0 prio 1 handle 204: protocol ip u32 divisor 256
tc filter add dev eth3 protocol ip parent 1:0 prio 1 u32 ht 101:2: match ip dst 10.2.2.0/24 hashkey mask 0x000000ff at 16 link 204:
Note that the bucket numbers for 10.2.1.0 and 10.1.1.0 are the same (":1:"). This is because the third octet is the same in both, so the rules go into the same bucket; only the parent table differs (101: for the 10.2 subnets versus 100: for the 10.1 ones, matching the /16 filters above). For the same reason the bucket for both 10.2.2.0 and 10.1.2.0 is ":2:".
The last step is to populate hash tables for the fourth (D) octet, e.g. for 10.1.1.D:
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:1: match ip dst 10.1.1.1/32 flowid 1:10
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:2: match ip dst 10.1.1.2/32 flowid 1:20
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:3: match ip dst 10.1.1.3/32 flowid 1:10
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 201:a: match ip dst 10.1.1.10/32 flowid 1:10
For example, the fourth octet of 10.1.1.1 is 1, so the kernel will look for a rule in hash table 201, bucket 1. That's why the first line contains "ht 201:1:". Similarly, for 10.1.1.2 we use "ht 201:2:". Remember, all ht values are in hex - that's why 10.1.1.10 has "ht 201:a:". "flowid 1:10" indicates which class the filter belongs to - probably you are using HTB for shaping, and this would be one of its classes (say, gold or bronze).
Apply the same approach to hosts in other subnets:
# Hosts in 10.1.2.0/24
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 202:1: match ip dst 10.1.2.1/32 flowid 1:20
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 202:2: match ip dst 10.1.2.2/32 flowid 1:10
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 202:3: match ip dst 10.1.2.3/32 flowid 1:20
# Hosts in 10.2.1.0/24
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 203:1: match ip dst 10.2.1.1/32 flowid 1:20
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 203:2: match ip dst 10.2.1.2/32 flowid 1:10
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 203:3: match ip dst 10.2.1.3/32 flowid 1:20
# Hosts in 10.2.2.0/24
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 204:1: match ip dst 10.2.2.1/32 flowid 1:10
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 204:2: match ip dst 10.2.2.2/32 flowid 1:20
tc filter add dev eth3 parent 1:0 protocol ip prio 1 u32 ht 204:3: match ip dst 10.2.2.3/32 flowid 1:10
GNOME applications, such as Evolution and Evince, use the locale to figure out the paper size you want. So, to change the paper type, you need to export the LC_PAPER environment variable set to a locale which uses the paper size you are looking for.
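For example (the locale names below are illustrative; any installed locale whose default paper size matches will do):

```shell
# A4 - most European locales default to A4:
export LC_PAPER="en_GB.UTF-8"
# Letter - US locales default to Letter:
# export LC_PAPER="en_US.UTF-8"
echo "LC_PAPER=$LC_PAPER"
```

Put the export into ~/.bash_profile (or your desktop session startup) to make it permanent.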
My system locale is en_US, and it seems that GNOME is hardcoded to use the Letter paper size for this locale, which is wrong for the place I live in - the standard paper size here is A4.
It took me quite a few walks to the office printer to figure out this one...
I have just released parprouted-0.7. The major new improvement is support for hosts moving across interfaces; credit for most of that work goes to Norbert Unterberg and Christian Knecht. Another improvement is support for the ARM arch, submitted by Zhouzhi.
Initially I thought it would be a quick midnight maintenance, not taking more than 10 minutes... Oh boy, I was wrong.
Googling around showed an absence of any real-life scenarios for Linux software RAID disk replacement. All articles were of the "and now we simulate a disk failure..." category, and on top of that most of them were outdated. No article seemed to cover the scenario where a disk has REALLY failed and the system was rebooted after the failure.
Even more surprisingly, it seems that CentOS 5/Red Hat Enterprise 5 rescue disks are NOT designed to handle software arrays with any kind of problem. They just refuse to detect problematic arrays, and mdadm will not show anything.
To cut a long story short, here is a REAL-LIFE procedure on how to replace a failed disk in Linux software RAID array:
- Insert the new hard drive (probably your server needs to be turned off when doing that).
- Boot from a rescue CD.
- Create a partition table on the new drive so that all partitions are in the same order and of the same sizes as the partitions on the working drive.
- Set the RAID partition type to Linux (83), not Linux raid auto (fd). THIS IS VERY IMPORTANT AND IS THE OPPOSITE OF INSTRUCTIONS YOU CAN FIND ELSEWHERE. Otherwise your Linux system won't boot.
- Now boot Linux from the working hard drive (I hope you had a bootloader installed on it; otherwise install one).
- Add the new hard drive into the array:
mdadm [MD-device] -a [new-HDD-device]
mdadm /dev/md0 -a /dev/sdb1
- Check that the hard disk was successfully added using:
mdadm -Q --detail [MD-device]
Among other things it should say something like "reconstructing 0%".
- Now, run fdisk, and change RAID partition type to Linux raid auto (fd).
- If everything went fine until here, consider yourself lucky. :)
I have just released parprouted 0.65. It fixes a problem with incorrect handling of the case when there are two entries for the same IP - an incomplete one and a correct one. This can happen when the machine running parprouted is rebooted. Credit for the fix goes to Dennis Borgmann and Matthias Huning.
I've got a Nokia E90 for testing, and was therefore faced with the task of copying the phone book from my old Motorola A780 to it. It took me a while to figure it out, and below you can find the quick(?) & dirty way to do it.
- You will need to have some sort of IP connectivity between your phones and PC. I successfully used GPRS and WiFi.
- Compile and install wbxml2-0.9.0 with the Nokia bug workaround. You can download it here. I had to copy the wbxml* include files from /usr/include to /usr/include/wbxml2 to make the configure script from multisync recognize that wbxml2 is installed.
- Download a CVS snapshot of multisync, and patch the SyncML plugin to send an "ADD" command instead of "REPLACE". This is not a bugfix but a nasty hack: multisync's backup plugin performs restores using the REPLACE command, and the Nokia E90 discards contacts sent via REPLACE that it doesn't already have. You can find the patched multisync source that I used here.
- Compile and install multisync. Note that you have to run "make install" also in the SyncML & Backup plugin directories, not only in the multisync source directory.
- Start up multisync using the following command: "MULTISYNC_DEBUG=1 multisync" (that will enable debug output to the console).
- Configure a pair with SyncML and Backup plugins. I was interested in nothing else but the phone book, so I ticked only "Addressbook" in the "Data types to synchronize".
- Configure SyncML plugin (Options... button) to use http protocol, enable "disable string tables" option.
- Enter a directory in the Backup plugin options.
- Now configure SyncML on your phones (usually somewhere from within a Sync application). Make sure that phone book database name is specified as "addressbook" (case sensitive!) and configure the phones to use the same URL and port as in SyncML plugin configuration.
- Make sure that your phones can reach your PC. Alter iptables configuration etc.
- Proceed to synchronize your OLD phone to multisync. Watch the debug output in the console from which you started multisync.
- Once synchronization is done, go into Backup plugin options, and click "Restore All".
- Proceed to synchronize your NEW phone to multisync. You may get an alert saying that there is a loss of integrity between the phone and multisync's database; confirm that you want to proceed with the restore anyway.
- The end. If it doesn't work, check multisync's debug output (which unfortunately is not very useful), and double-check that you have entered the settings properly in your phone. tcpdump may also be useful to see whether there is communication between the phone and your PC.
Very few people realize that Linux's tc u32 filter allows you to match a range of values. Basically, the logic is very similar to IP address/netmask matching.
For example, this u32 rule will match all source ports in the port range 0-1023: match ip sport 0x0 0xfc00
All you need is some simple binary arithmetic. Remember that u32 does an AND operation of the packet field against the mask, and then compares the result with the target value. 1023 is 0x3FF in hex; after inversion (XOR 0xffff) we get 0xfc00. 0xfc00 is 1111110000000000 in binary, so using it as a mask means the ten low-order bits are ignored. 0x0 (zero) is the target value that u32 will compare against. Any port value of 1024 or above leaves a non-zero result after the "[port] AND 0xfc00" operation, so the filter matches only ports below 1024.
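The arithmetic can be sanity-checked outside the kernel; this little shell loop reproduces the AND-and-compare that u32 performs (the sample port list is arbitrary):

```shell
# "match ip sport 0x0 0xfc00" matches a port iff (port AND 0xfc00) == 0x0,
# i.e. iff the port is below 1024.
mask=$(( 0xfc00 ))
for port in 0 80 1023 1024 8080 65535; do
  if [ $(( port & mask )) -eq 0 ]; then
    echo "$port matches"
  else
    echo "$port does not match"
  fi
done
```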
I have released ciscobackup 1.0, a handy utility script for secure backup of the configuration of Cisco IOS-based routers and other devices via SSH. The script depends on the Net::SSH::Perl module; make sure to look at the README file before installing.
According to the following article, the amount of spam has almost doubled during the past year. Most spam is sent from PCs infected by trojans.
Actually, it is relatively easy for an Internet Service Provider to prevent this type of spam from being sent by its customers. Over here at PrimeTel (a Cyprus ISP) we do not allow home users to connect to third-party Internet SMTP servers by default, and we developed an add-on to our mail server so that any IP address which has sent more than a threshold amount of emails during the past hour is blocked automatically.
If mail server vendors/developers start including this feature in standard configurations, the problem will be gone.
VirtualBox is a nice, free, and semi-open-source virtual machine, allowing you to run WinXP on Linux pretty much the same way as with VMware.
The only documented methods to do networking from the VM are NAT and a host-based bridge using Linux layer-2 bridging (brctl). NAT was no good for me, as I need incoming connections to work. A layer-2 bridge is no good if you don't have a multiple-MAC-capable networking card (e.g. a wireless connection), or if you want to filter IP packets coming out of the VM on the host using iptables AND without allocating an additional subnet.
I had the latter case - I have a /29 on my Internet connection with six usable IPs, and splitting it into two /30 subnets would leave me with only two usable IPs in each.
Surely enough, parprouted comes to the rescue. :) Below are the steps to set it up.
- Get the tunctl utility - it comes together with the UML utilities. In my case I just downloaded one of the binary RPMs on the net and copied tunctl to /usr/local/sbin.
- Download and install parprouted.
- Add the following to your /etc/rc.d/rc.local:
/usr/local/sbin/tunctl -t tap0
/sbin/ip link set tap0 up
/sbin/ip addr add 172.16.16.16/32 dev tap0
/usr/local/sbin/parprouted eth0 tap0
- Make sure IP forwarding is enabled on the host (/etc/sysctl.conf in Fedora).
- Restart your PC (or execute commands manually).
- Go to VM settings in VirtualBox, in "Networking" select Host-based. In the device name field, enter "tap0".
- Configure your WinXP (or whatever OS you run in the VM) to use one of the IP addresses from the subnet assigned to your eth0 interface.
- That's it - test your connectivity. Make sure that iptables configuration on your host is not filtering out packets sent from the VM.
1.3.1p35, which had been in beta since 2005, has been declared stable. Of course it is quite outdated right now, as it doesn't have reliable Atheros support; I wish I had time to update it. :-(
I made a new release of parprouted, which fixes a bug where it sent ARP requests for incomplete records. Thanks to Ben Pfountz for the patch.
Here are a few useful hints I learned while I was setting up VPS for hazardous-area.org:
- A default mysql+apache installation WON'T fit into 128 MB of memory on an OpenVZ VPS - you'll get memory allocation errors. The core of the issue is that OpenVZ apparently counts all memory allocated by processes, even if it hasn't been used. To overcome the problem you need to minimize the allocated memory:
- Use minimal mysqld config (copy my-small.cnf from /usr/share/doc/mysql-server*).
- Add "skip-innodb" to my.cnf - it will reduce MySQL server memory footprint by about 100 MB.
- Use lighttpd instead of Apache.
- openvzmon is a nifty tool which gives a more realistic report on CPU and memory usage by your OpenVZ VPS.
- Also you can get useful stats from /proc/user_beancounters.
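For the MySQL part above, a minimal my.cnf might look like the sketch below, in the spirit of my-small.cnf with InnoDB disabled (the values are illustrative, not tuned recommendations):

```ini
[mysqld]
skip-innodb            # avoids ~100 MB of allocated-but-unused memory
key_buffer = 16K
table_cache = 4
sort_buffer_size = 64K
net_buffer_length = 2K
max_connections = 10
```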
If you try to install FC6 on an HP DX7300 workstation, the kernel will hang during boot. After some fiddling I figured out that you need to pass the following option at the boot prompt to resolve the problem: "linux pci=nommconf".
I found a very interesting paper on Skype presented at the Black Hat Europe 2006 conference. Scary, too. I guess secure communication is one of the clear cases where open source software has an inherent advantage for the user...
I spent a lot of time wondering why several WinTV PVR 150 cards didn't work together properly under the Linux ivtv driver - video was getting stuck after 15-30 minutes. After I plugged them into an old Compaq DL360 server with an Intel motherboard, the problem disappeared. The conclusion: the AMD 760MP and multiple WinTV PVR 150 cards do not live well together.
I encountered an interesting paper - "An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl for a search/string processing program". Basically, the same task was given to a number of programmers, and then various parameters - such as time to analyze the task, time to write the code, reliability, performance, memory consumption, etc. - were compared. The results are in line with what I would expect for the specific task. Perl got the best score for time to develop the code, while C/C++ had the best performance and memory footprint. Java performance was lower than I would expect, but that can be explained by the fact that the report is circa 2000, and nowadays JVMs should have much better performance.
Just in case you didn't know... Sony is so desperate to stop their customers from ripping audio CDs that they included a rootkit as part of the bundled DRM software. You see, as soon as Sony's customer installs the player software from his legitimately purchased CD, he would not only have a player on his system, but also a rootkit hidden deep inside Windows! Fantastic.
Everything was fine and dandy until Sysinternals discovered and blogged about it. Following a public outcry, Sony released an "uninstaller" for the rootkit. However, as Sysinternals found, what it actually does is install an updated copy of the DRM... Moreover, you have to fill in a couple of forms to get the uninstaller...
Already a few trojans have appeared that utilize the cloaking capabilities provided by the rootkit. Obviously the writers of these trojans didn't read Sony's press release, from which they would have learned that Sony's DRM software does not present any security risk to the customer's PC.
So, that's it? Nah. The player software is nice enough to access Sony's website each time you run it. Sure, having some extra information about your customers wouldn't hurt.
Now, imagine that rootkit was developed/distributed not by Sony, but by some teenage guy. Would he still be around?
A "spec" is close to useless. I have _never_ seen a spec that was both
big enough to be useful _and_ accurate.
And I have seen _lots_ of total crap work that was based on specs. It's
_the_ single worst way to write software, because it by definition means
that the software was written to match theory, not reality.
-- Linus Torvalds
When fed an .AVI file made by transcode, dv2dv exits with a confusing "cannot open file" error. In fact, it doesn't recognize the AVI file as being a DV one. I made a small patch to fix it.
One problem I encountered when using Opera on my Motorola A780 is that when I go to Google, it responds in WML. Considering that Opera is fully HTML compliant, this is very inconvenient, as you can't directly go to websites or use Google Images. So, about a week ago I found Google's problem-reporting form and described the issue there. Yesterday I got a reply - they are working on a fix, and as a workaround I can use http://www.google.com/webhp?hl=en, which forces Google to answer in HTML.
I was investigating strange "Connection truncated" errors when Subversion clients were connecting through WebDAV. It turned out that Apache's mod_dav_svn and mod_auth_pam do not interoperate very well, which results in httpd crashing with SIGALRM and "*** glibc detected *** free(): invalid pointer:" errors in the error_log.
In my case the solution was to switch to mod_auth_ldap and the crashes were gone.
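The replacement setup looked roughly like the sketch below (Apache 2.0-era mod_auth_ldap; the repository path, LDAP URL, and attribute are placeholder assumptions, not my actual configuration):

```apache
<Location /svn>
    DAV svn
    SVNPath /var/svn/repo

    AuthType Basic
    AuthName "Subversion repository"
    # mod_auth_ldap instead of mod_auth_pam:
    AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
    require valid-user
</Location>
```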
To my surprise, I discovered that whenever I create a PDF from DocBook using docbook2pdf, it fetches DTDs from the net. This makes PDF generation 3-4 times slower. Some Googling turned up the buildDocBookCatalog script, which can be downloaded from xmlsoft.org. It creates a catalog file which contains a mapping from URLs to locally cached DTD files.