tag:blogger.com,1999:blog-12683964820108263572024-02-06T22:13:49.315-05:00args[]Software development and other projectsAxlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.comBlogger18125tag:blogger.com,1999:blog-1268396482010826357.post-54671774359129383872016-09-11T08:17:00.000-04:002016-09-11T08:17:49.542-04:00Loan Amortization Schedule Calculator I wanted a loan amortization schedule that would give a monthly breakdown of savings from extra principal payments made during the repayment stream, and I also wanted it to be mobile friendly. I haven't seen any that meet the first requirement, and many don't meet the second. So I wrote a little JavaScript program to compute loan amortization schedules, <a href="https://www.allensw.com/loancalc/">https://www.allensw.com/loancalc/</a>.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-66408240642659162652016-09-10T08:29:00.000-04:002016-09-10T08:29:20.384-04:00Ingress Packet Shaping (with CoDel)<p>Until recently I had been using a Raspberry Pi as my router and firewall, which worked reasonably well, though some useful features were lacking from its kernel. Within the past few months, my ISP increased the bandwidth of my plan and the Pi could no longer keep up. So I upgraded my router to an "old" Intel Core i5 laptop. With that came an up-to-date Linux kernel with its full suite of packet shaping tools.</p>
<p>I have always used packet shaping on my router to help keep latency low. Low latency can be maintained under heavy bandwidth utilization by sending a little slower than the service plan allows, keeping modem and router buffers relatively empty. Latency increases when those buffers fill with data packets, a condition known as <a href="https://en.wikipedia.org/wiki/Bufferbloat">bufferbloat</a>. Shaping outbound traffic to the internet is the most effective, simply because I have full control over how fast I send data down the pipe to my ISP. Shaping, or rather policing, downstream traffic is not as easy because I have no control over how fast servers send data to me. Generally the best you can do is drop some packets to force the TCP layer to slow down. In the past I found that avoiding high latency during heavy downstream utilization required setting the ingress policing filter rate to a value substantially lower than my available bandwidth, say 75% of my downstream capacity. Obviously this means I could never utilize the link's full capacity. Additionally, a single heavy downstream connection never seemed to reach even the configured ingress rate. These were the problems I wanted to solve with my new router setup.
</p>
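<p>To put numbers on that tradeoff, here is a quick sketch of the percentage math (the 30000 kbit/s plan rate is just an example for illustration, not my actual plan):</p>

```shell
# Policing at 75% of an assumed 30000 kbit/s downstream plan.
# These figures are illustrative, not my real service rates.
PLAN_KBIT=30000
POLICE_KBIT=$((PLAN_KBIT * 75 / 100))
echo "${POLICE_KBIT} kbit/s"   # the remaining 25% of capacity is given up for latency
```

<p>Giving up a quarter of the link for latency is a steep price, which is why I wanted a better approach.</p>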
<p>
Usually the only way to limit the flow of downstream data is with an ingress policing filter, dropping packets when a preset data rate is exceeded. The more advanced packet shaping methods are only available as egress (upload) filters. However, the Linux kernel provides the Intermediate Functional Block device (IFB) to help apply those egress shaping methods to ingress data. Once set up, it acts as an intermediate device that you can funnel ingress data through and shape as if it were egress data. The Pi's kernel lacked the IFB device, making it difficult to shape inbound traffic.
</p>
<p>I tried a few different packet shaping filters when setting up the IFB device; none of them worked as well as I had hoped until I tried <a href="https://en.wikipedia.org/wiki/CoDel">CoDel</a>. I mostly followed the instructions found in <a href="https://wiki.gentoo.org/wiki/Traffic_shaping">this Gentoo traffic shaping post</a>, modified for my needs. The script below is the one I have been using for a while, and it seems to work well. A GitHub repository can be found at <a href="https://github.com/axlecrusher/packetshaping">https://github.com/axlecrusher/packetshaping</a>.
</p>
<div class='code'>
echo 65535 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
modprobe sch_fq_codel
modprobe ifb
ifconfig ifb0 up
# mark(): run the iptables command passed in $2 twice, first to set
# firewall mark $1 and then to RETURN so later rules don't re-mark.
mark()
{
$2 -j MARK --set-mark $1
$2 -j RETURN
}
OUTRATE=3300 #kbit, total upstream ceiling
#OUTRATESPLIT=666
OUTRATESPLIT=1000 #kbit, guaranteed rate per class
#DOWNMAX=35000
DOWNMAX=33 #mbit, inbound ceiling used by the ifb0 tbf below
DOWNRATE=31500 #kbit, unused below
#MTU=1454
MTU=1500
iptables -t mangle -N TOINTERNET
iptables -t mangle -N FROMINTERNET
#iptables -t mangle -F
iptables -t mangle -A PREROUTING -i eth0 ! -d 192.168.0.0/16 -j TOINTERNET
iptables -t mangle -A PREROUTING -i eth1 ! -s 192.168.0.0/16 -j FROMINTERNET
#iptables -t mangle -A PREROUTING -i eth1 -j IMQ --todev 0
iptables -t mangle -A FORWARD -o eth1 -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1400:65495 -j TCPMSS --clamp-mss-to-pmtu
#CrashPlan, need tc to check tos = 0x04
mark "0x40" "iptables -t mangle -A TOINTERNET -s 192.168.1.250 -p tcp --dport 443"
#interactive
mark "0x20" "iptables -t mangle -A TOINTERNET -p udp --dport 9000:9010" #adam game
#from internet
mark "0x10" "iptables -t mangle -A FROMINTERNET -p udp --sport 9000:9010" #adam game
#iptables -t mangle -L -v -n
#tc qdisc del dev imq0 root 2> /dev/null > /dev/null
tc qdisc del dev eth1 root 2> /dev/null > /dev/null
tc qdisc del dev eth1 ingress 2> /dev/null > /dev/null
tc qdisc del dev ifb0 root 2> /dev/null > /dev/null
tc qdisc del dev ifb0 ingress 2> /dev/null > /dev/null
#default traffic queue 30
tc qdisc add dev eth1 root handle 1: htb default 30
tc class add dev eth1 parent 1:0 classid 1:1 htb rate ${OUTRATE}kbit ceil ${OUTRATE}kbit burst 75k
tc class add dev eth1 parent 1:1 classid 1:10 htb rate ${OUTRATESPLIT}kbit ceil ${OUTRATE}kbit mtu ${MTU} prio 1
tc class add dev eth1 parent 1:1 classid 1:20 htb rate ${OUTRATESPLIT}kbit ceil ${OUTRATE}kbit mtu ${MTU} prio 2
tc class add dev eth1 parent 1:1 classid 1:30 htb rate ${OUTRATESPLIT}kbit ceil ${OUTRATE}kbit mtu ${MTU} prio 3
tc class add dev eth1 parent 1:1 classid 1:40 htb rate 58kbit ceil ${OUTRATE}kbit mtu ${MTU} prio 4
tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10 limit 43 #average packet size of 83 bytes
tc qdisc add dev eth1 parent 1:20 handle 20: sfq perturb 10 limit 5 #average packet size 52 bytes, 30ms buffer
tc qdisc add dev eth1 parent 1:30 handle 30: sfq perturb 10 limit 5 #average packet size 122 bytes, 35ms buffer
tc qdisc add dev eth1 parent 1:40 handle 40: sfq perturb 10 limit 70
#####first filter to match wins
#icmp
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 1 0xff flowid 1:10
#DNS
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 17 0xff match ip dport 53 0xffff flowid 1:10
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 6 0xff match u8 0x12 0xff at nexthdr+13 flowid 1:10 #SYN,ACK
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 6 0xff match u8 0x02 0xff at nexthdr+13 flowid 1:10 #SYN
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 6 0xff match u8 0x11 0xff at nexthdr+13 flowid 1:10 #FIN,ACK
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 6 0xff match u8 0x01 0xff at nexthdr+13 flowid 1:10 #FIN
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 6 0xff match u8 0x14 0xff at nexthdr+13 flowid 1:10 #RST,ACK
tc filter add dev eth1 parent 1:0 protocol ip prio 10 u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 match u8 0x10 0xff at 33 match u16 0x0000 0xffc0 at 2 flowid 1:20
tc filter add dev eth1 parent 1:0 protocol ip prio 11 u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 match u8 0x10 0xff at 33 match u16 0x0000 0xff00 at 2 flowid 1:30
tc filter add dev eth1 parent 1:0 protocol ip prio 12 handle 0x10 fw flowid 1:10
tc filter add dev eth1 parent 1:0 protocol ip prio 13 handle 0x20 fw flowid 1:20
tc filter add dev eth1 parent 1:0 protocol ip prio 14 handle 0x30 fw flowid 1:30
tc filter add dev eth1 parent 1:0 protocol ip prio 15 handle 0x40 fw flowid 1:40
#shape inbound traffic
tc qdisc add dev eth1 handle ffff: ingress
tc filter add dev eth1 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: tbf rate ${DOWNMAX}mbit burst 40k latency 30ms
tc qdisc add dev ifb0 parent 1: fq_codel
#tc -s -d class show dev eth1
</div>
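<p>One sanity check worth doing on the ingress tbf above: per the tc-tbf(8) man page, the burst (bucket) size must be at least the rate divided by the kernel's HZ, or packets get dropped on every timer tick. A quick sketch against the 33 Mbit/s ceiling, assuming HZ=250 (check your own kernel's timer frequency):</p>

```shell
# Minimum tbf burst in bytes = rate_in_bits / HZ / 8.
# HZ=250 is an assumption here; many kernels use 100, 250, or 1000.
RATE_BITS=$((33 * 1000 * 1000))
HZ=250
echo "$((RATE_BITS / HZ / 8)) bytes"   # the script's 40k burst leaves headroom above this
```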
Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com1tag:blogger.com,1999:blog-1268396482010826357.post-52786223485502327302014-03-31T16:56:00.003-04:002014-04-01T07:17:22.661-04:00Homebrew USB Sound Card <p>This project began out of necessity as I was working on a Raspberry Pi project that required low-latency audio output. However, I was having major issues with latency from the Pi's sound card, observing at least 20-30ms of latency. This would not do; I needed real-time responsiveness, less than 5ms. With the help of <a href="https://www.youtube.com/channel/UCG7yIWtVwcENg_ZS-nahg5g">CNLohr</a> I set out to build my own low-latency USB sound card.</p>
<p>Here is the result... <br/><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6mSA1JjQt7FwH6ttqCS6W2pKyQVlaeNUo4YHbyqkrw5IvVnJTLDCvpSNo1gMSTXAQnR9C2MIvmJ1ne9bEq1D-zkZh_Gp7X_SJ3iIRpTx6ZQmUdrydZgUQlxbxEbMCG13ktpq_W94esk7l/s1600/DSCN1316.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6mSA1JjQt7FwH6ttqCS6W2pKyQVlaeNUo4YHbyqkrw5IvVnJTLDCvpSNo1gMSTXAQnR9C2MIvmJ1ne9bEq1D-zkZh_Gp7X_SJ3iIRpTx6ZQmUdrydZgUQlxbxEbMCG13ktpq_W94esk7l/s200/DSCN1316.JPG" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNqAtnW6iqUmI0DyglsBQr2Qal1exKKwOM_GcF_Vl7KpRhMMksOmY3nhosisD0eKNjK_hk0jgrNol-hUskswVBlEt8YcUbYg6YXehSF5w9Akk1Yt-I4NFsg8maBVkROxBI7UVKMyNttIVu/s1600/DSCN1317.JPG" imageanchor="1" style="clear: left; float: right; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNqAtnW6iqUmI0DyglsBQr2Qal1exKKwOM_GcF_Vl7KpRhMMksOmY3nhosisD0eKNjK_hk0jgrNol-hUskswVBlEt8YcUbYg6YXehSF5w9Akk1Yt-I4NFsg8maBVkROxBI7UVKMyNttIVu/s200/DSCN1317.JPG" /></a></div></p>
<p>CNLohr designed the PCB and I programmed the firmware. The sound card uses a MAX5556 digital-to-analog converter (DAC), an ATmega8U2, and an ATtiny44. I made a few modifications to the PCB to make processing audio data more efficient and to fix a design flaw.</p>
<p>An ATtiny44 drives the MAX5556 DAC over its 3-wire I²S interface. The ATtiny runs at 24.5MHz which, with some clever use of the microprocessor, can feed the DAC 48kHz 16-bit stereo data. I use the tiny's hardware timer to toggle the PA7 pin every 256 clock ticks, which drives the DAC's left/right clock. An assembly loop toggles the DAC's serial clock and sdata pins; this loop is synchronized with the left/right clock (PA7). An assembly interrupt copies data out of the ATtiny's SPI registers into memory. To minimize the time the SPI interrupt takes, I had to minimize the amount of work done. Within the assembly I carefully selected registers so that I could avoid having to push or pop registers in the event of an SPI interrupt.</p>
<p>The ATmega8U2 communicates with the USB host and the ATtiny. The ATmega8U2 is clocked at 16MHz, a limitation imposed by USB. Audio is streamed from the USB host into a circular buffer, which is emptied over the SPI interface to the ATtiny. Assembly is used to send the data over SPI to take advantage of the AVR's store-and-increment instruction, saving many clock cycles. The ATmega8U2 and the ATtiny are kept in sync using an interrupt on the PC7 pin. This pin is, like the DAC's left/right clock, connected to the tiny's PA7 pin. A rising edge on PC7 signals that it is time to begin sending new data to the ATtiny.</p>
<p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvzGOI_5aL5DM-DZYBEIWBj0C5Dn-juanPED5hG7v1AHeB4TAR9p5LXm3ef75pYHNluXFajdTrU6jkPUBUeqKrxbKZ42bZPj21DAW8B337bLGqv21tWDeZesQftVryR7L8HIhShDTRu0kx/s1600/DSCN1327.JPG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvzGOI_5aL5DM-DZYBEIWBj0C5Dn-juanPED5hG7v1AHeB4TAR9p5LXm3ef75pYHNluXFajdTrU6jkPUBUeqKrxbKZ42bZPj21DAW8B337bLGqv21tWDeZesQftVryR7L8HIhShDTRu0kx/s200/DSCN1327.JPG" /></a></div>A quickly hacked together host-side program sends raw stereo data to the sound card. The sound card's internal buffer holds about 64 samples per channel, about 1.3ms. USB double buffering provides an additional ~0.6ms of buffer. The sound card works pretty well from a PC. Occasionally it experiences an inaudible buffer underflow; a red LED flickers on the sound card when this is detected. Using this sound card on the Raspberry Pi is a completely different story. The Pi isn't capable of servicing the USB fast enough to keep up. Buffer underflow occurs constantly, rendering it completely useless.</p>
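<p>The buffer figure above follows directly from the sample rate; a quick sketch of the arithmetic (48kHz and 64 samples per channel are the design values from above):</p>

```shell
# 64 samples per channel at 48000 samples/sec, expressed in microseconds.
SAMPLES=64
RATE_HZ=48000
echo "$((SAMPLES * 1000000 / RATE_HZ)) us"   # roughly the 1.3 ms quoted above
```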
<p><iframe width="420" height="315" src="//www.youtube.com/embed/i6TJSVomcYY" frameborder="0" allowfullscreen></iframe></p>
<p>The source code is available at <a href="https://github.com/axlecrusher/AvrProjects/tree/master/soundCard">https://github.com/axlecrusher/AvrProjects/tree/master/soundCard</a></p>
<p>Slightly outdated schematics can be found here. <a href="https://svn.cnlohr.net/pubsvn/electrical/avr_soundcard/">https://svn.cnlohr.net/pubsvn/electrical/avr_soundcard/</a> It is lacking the modifications visible in my photos.</p>
<p><b>Edit:</b> Added a note about double-buffered USB.</p>Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-77675533217104468722014-03-01T14:08:00.001-05:002014-03-01T14:22:11.613-05:00Mdadm External Write-intent Bitmap Hack (Debian update)<p>I recently upgraded to Debian 7.4 and found that I needed to redo my <a href="http://axlecrusher.blogspot.com/2012/09/mdadm-external-bitmap-boot-hack.html">external write intent bitmap hack</a>. My old method no longer worked.</p>
<div>
First I needed to disable mdadm assembly when running from the ramdisk. Edit /etc/default/mdadm and set the following:
<div class="code">INITRDSTART='none'</div>
<br/>
Rebuild the ramdisk image with:
<div class="code">update-initramfs -u</div>
<br/>
You still need to prevent mdadm from assembling the arrays listed in /etc/mdadm/mdadm.conf, so change its DEVICE line to
<div class="code">DEVICE /dev/null</div>
<br/>
I created a new mdadm config file named /etc/mdadm/mdadm.delayed.conf, which is a copy of /etc/mdadm/mdadm.conf but leaves
<div class="code">DEVICE partitions</div> as is. I also specify the bitmap file on the ARRAY definition line:
<div class="code">ARRAY /dev/md/127 metadata=0.90 bitmap=/md127-raid5-bitmap UUID=....</div>
<br/>
Next I created a new script /etc/init.d/delayedRaid
<div class="code">#!/bin/sh
#
# Start all arrays specified in the delayed configuration file.
#
# Copyright © 2014 Joshua Allen <josh@allensw.com>
# Distributable under the terms of the GNU GPL version 2.
#
### BEGIN INIT INFO
# Provides: delayedRaid
# Required-Start: $local_fs mdadm-raid
# Should-Start:
# X-Start-Before:
# Required-Stop:
# Should-Stop: $local_fs mdadm-raid
# X-Stop-After:
# Default-Start: S
# Default-Stop: 0 6
# Short-Description: Delayed MD array assembly
# Description: This script delays assembly of MD raid
# devices. Useful for raid devices that use external
# write intent bitmaps.
# Settings are in /etc/mdadm/mdadm.delayed.conf
### END INIT INFO
. /lib/lsb/init-functions
do_start()
{
log_action_begin_msg "Starting delayed raid"
mdadm --assemble --scan --config=/etc/mdadm/mdadm.delayed.conf
log_action_end_msg $?
mount /mnt/raid5
}
do_stop()
{
umount /mnt/raid5
mdadm --stop --scan --config=/etc/mdadm/mdadm.delayed.conf
}
case "$1" in
start)
do_start
;;
restart|reload|force-reload)
echo "Error: argument '$1' not supported" >&2
exit 3
;;
stop)
do_stop
;;
*)
echo "Usage: delayedRaid [start|stop]" >&2
exit 3
;;
esac
</div>
<br/>
And I added to the start-up procedures with
<div class="code">insserv -d delayedRaid</div>
<br/>
After reboot, check that
<div class="code">Intent Bitmap : {some file name}</div>
is present when running
<div class="code">mdadm --detail /your/raid/device</div>
</div>
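If you prefer grepping to eyeballing, the same check can be scripted. A convenience sketch of my own, with a sample line standing in for real <span class="code">mdadm --detail</span> output since that command needs a live array:

```shell
# Look for the Intent Bitmap line in `mdadm --detail` output.
# The variable below stands in for output captured from a real array.
sample='  Intent Bitmap : /md127-raid5-bitmap'
echo "$sample" | grep -q 'Intent Bitmap' && echo "bitmap attached"
```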
<p>Hopefully I didn't miss anything.</p>
Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-87930138034114913462013-01-25T18:48:00.001-05:002016-03-31T21:28:20.852-04:00Install djbdns on Raspberry Pi<h2>
Install djbdns on Raspberry Pi</h2>
<div>
djbdns is a small, fast, and secure DNS server, perfect for low-resource systems. I also find it easier to configure than BIND (once you understand how).</div>
I start with a Raspbian image from <a href="http://www.raspberrypi.org/downloads">http://www.raspberrypi.org/downloads</a><br />
<br />
<div>
Install some packages that D. J. Bernstein says we need.
<br />
<div class="code">
apt-get install ucspi-tcp
apt-get install daemontools</div>
</div>
<div>
<br />
Don't install the tinydns package; it includes a POP3 server.<br />
Install djbdns following <a href="http://cr.yp.to/djbdns/install.html">http://cr.yp.to/djbdns/install.html</a></div>
<br />
Create the users and groups that we will need for running dnscache and multilog.<br />
<div class="code">
useradd svclog
useradd dnscache</div>
<br />
Create the /etc/dnscache folder structure<br />
<div class="code">
dnscache-conf dnscache svclog /etc/dnscache</div>
<br />
Set up the /service directory; svscan looks at this directory to see which services to run.<br />
<div class="code">
mkdir /service
ln -s /etc/dnscache /service/dnscache</div>
<br />
Add the following to /etc/rc.local so that the supervised services start on boot.<br />
<div class="code">
/usr/bin/svscanboot &</div>
<br />
svscanboot also needs the following link to function correctly.<br />
<div class="code">
ln -s /service/ /etc/service</div>
<br />
<h4>
Optional Things</h4>
Update /etc/dnscache/env/IP to contain the IP address to listen on. Also create file entries in /etc/dnscache/root/ip to specify the networks that the DNS server should reply to.<br />
<br />
Edit /etc/dnscache/log/run, adding s52428800 before ./main to set the maximum log size to 50MB.<br />
It should look something like<br />
<div class="code">
exec setuidgid svclog multilog t s52428800 ./main</div>
<br />
You should also update the root server list:
<br />
<div class="code">
wget http://www.internic.net/zones/named.root -O - | grep ' A ' | tr -s ' ' | cut -d ' ' -f4 > /etc/dnscache/root/servers/\@</div>
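<br />
To see what that pipeline does, here it is run against a two-record stub of named.root instead of the internic download (the format is abbreviated; the addresses are the published IPs of a.root-servers.net and b.root-servers.net, current as of this writing):
<br />

```shell
# Extract the IPv4 glue addresses (field 4 after squeezing whitespace)
# from an abbreviated stand-in for named.root.
cat <<'EOF' | grep ' A ' | tr -s ' ' | cut -d ' ' -f4
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
B.ROOT-SERVERS.NET.      3600000      A     199.9.14.201
EOF
```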
<br />
Update /etc/resolv.conf to use your new DNS server.
<br />
<br />
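For reference, the resulting /etc/resolv.conf is a single line; 192.168.1.1 is an assumed address here, use whatever you put in /etc/dnscache/env/IP:

```
nameserver 192.168.1.1
```
<br />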
<div>
It is also a good idea to apply some cname patches. <a href="http://homepage.ntlworld.com/jonathan.deboynepollard/Softwares/djbdns/#dnscache-cname-handling">http://homepage.ntlworld.com/jonathan.deboynepollard/Softwares/djbdns/#dnscache-cname-handling</a> </div>
<div>
Change the UDP packet size to accommodate big UDP packets. Many DNS servers send large UDP responses, and without this patch dnscache will fail with "drop" input/output errors. <a href="https://dev.openwrt.org/browser/packages/net/djbdns/patches/060-dnscache-big-udp-packets.patch">https://dev.openwrt.org/browser/packages/net/djbdns/patches/060-dnscache-big-udp-packets.patch</a></div>
<h4>
Resources</h4>
<a href="http://cr.yp.to/djbdns/dnscache.html">http://cr.yp.to/djbdns/dnscache.html</a><br />
<a href="http://cr.yp.to/daemontools/multilog.html">http://cr.yp.to/daemontools/multilog.html</a><br />
<a href="http://cr.yp.to/daemontools/supervise.html">http://cr.yp.to/daemontools/supervise.html</a><br />
<a href="http://tinydns.org/">http://tinydns.org/</a>Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com2tag:blogger.com,1999:blog-1268396482010826357.post-9343897406469632742012-09-22T00:53:00.002-04:002014-03-01T14:18:43.108-05:00mdadm external bitmap boot hack<p>
<strong>Update 3/1/2014:</strong> I have <a href="http://axlecrusher.blogspot.com/2014/03/mdadm-external-bitmap-boot-hack-debian.html">updated my procedure for Debian 7.4</a>.
</p>
<div>
Linux's mdadm gives you the ability to use a write-intent bitmap, which helps speed resync if a drive fails and is then re-added. There are two bitmap storage options, internal and external. Internal storage keeps the bitmap on the RAID array's disks and can slow writing significantly. External storage uses a file on an ext2 or ext3 file system; it can be quicker than internal storage during writes but causes big problems during boot.<br />
<br />
In order for the bitmap to function, it must be writable at the time mdadm assembles the array; if it is not, mdadm will fail to start the array. At that point during boot, typically no partitions are writable yet. My solution was to shift array assembly to a point after mountall (which mounts the file systems found in fstab) has executed.<br />
<br />
First I prevent any RAID partitions from mounting during boot by setting noauto in fstab.<br />
<br />
I copied /etc/mdadm/mdadm.conf to /etc/mdadm/mdadm.manual.conf. Then I edited /etc/mdadm/mdadm.conf, changing <span class="code">DEVICE partitions</span> to <span class="code">DEVICE /dev/null</span>. This makes mdadm scan for RAID partitions on the null device, where it will find none, so RAID devices no longer assemble during the boot process.<br />
<br />
Then I created a new script in /etc/rcS.d/ named S02mountRaid (don't forget execute permissions). The script contains the following lines.<br />
<div class="code">
mdadm --assemble --scan --config=/etc/mdadm/mdadm.manual.conf
mount /mnt/raid5
</div>
This causes mdadm to scan the unmodified copy of our mdadm.conf. Mdadm correctly assembles the RAID devices found in that file and assigns the external bitmap to the proper array. The RAID device is then mounted.<br />
<br />
This script runs at every run level, and runs before any other init.d script. Future work: find a way to make init.d wait for script completion.
</div>Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-15658903845418040082012-05-04T23:27:00.004-04:002013-10-10T08:49:00.003-04:00DRAM TestingFor the past few months I have been working on restoring an old Commodore 64 that someone gave to me. It was missing a few obvious pieces, and after making a new AV cable and obtaining a new power supply I found that it wouldn't boot properly. So far I have replaced the PLA, VIC, capacitors, and a few other components. But this post is not about those things. This is about testing the Commodore 64's DRAM chips.<br />
<br />
<div>
My particular C64 uses 8 individual RAM chips; most are D4164C-15 and a couple are D4164C-2. I replaced all the RAM, but I wanted to know whether both the original and replacement chips were functioning properly. To test each chip, I needed some hardware to test with.
<br />
<br />
I decided to use an ATmega8U2 AVR microcontroller, mainly because I have one that can plug into a breadboard and it has enough pins to drive the D4164C chips.<br />
<br />
I wired the test setup to reduce the instruction count where I could, so it is a little messy. Wiring and instruction count could be greatly improved if I didn't need the programming header, but it gets the job done.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp6Gau_FmCCK7UDfYmTmThlf2VwDbqpCwEatCd2LsahZw9AsEs-EAafdVPqsycaj373mCmuabf3CrjexB8TOnh1M2HwbATz51BnqnCuV6HtLZBHaYth8QvdfCOmrF2Ur9CXAwyqYg3M4z3/s1600/DSCN0988.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp6Gau_FmCCK7UDfYmTmThlf2VwDbqpCwEatCd2LsahZw9AsEs-EAafdVPqsycaj373mCmuabf3CrjexB8TOnh1M2HwbATz51BnqnCuV6HtLZBHaYth8QvdfCOmrF2Ur9CXAwyqYg3M4z3/s320/DSCN0988.JPG" width="320" /></a></div>
<br />
<br />
At this point I realized I had no idea how to control DRAM. My friend <a href="http://www.youtube.com/user/CNLohr">CNLohr</a> gave me a quick explanation of how DRAM works (basically you have to refresh it within a time period, and this RAM is organized as 256 rows by 256 columns). Ok...<br />
<br />
The first thing to do was to try to write and read just one bit. The DRAM's datasheet provides timing charts for each of the operations the memory can perform. After a few hours of stepping through the charts, coding, re-coding, reviewing the charts, and sometimes just trial and error, I was finally able to write and read 1 bit from memory. After a couple more days of work I was reading and writing the entire memory module. (I forget exactly what made this take so long to accomplish. Some kind of bug in my program.)<br />
<br />
I constructed a couple of routines to test both the wiring and the memory. The tests are largely based on information found at <a href="http://www.ganssle.com/testingram.htm">http://www.ganssle.com/testingram.htm</a>. There's a lot of good information there for developing RAM tests.<br />
<br />
To test the wires, I wrote 0 to the first bit of memory, followed by a 1 to a power-of-2 memory location (high on just one address wire). I then read memory location zero; if the value is no longer 0, it indicates a failure on a specific address wire.<br />
<br />
I used a walking one algorithm with a bit inversion to test all the memory cells. The goal is to toggle as many bits as possible.<br />
<br />
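The walking-one-with-inversion pattern is easier to see written out. A sketch of one byte's worth of patterns (the actual firmware walks these across the whole 64K-bit array; this just prints them):
<br />

```shell
# Walk a 1 through each bit position, then invert it so every bit
# gets exercised both high and low.
for i in 0 1 2 3 4 5 6 7; do
  v=$((1 << i))
  printf 'write 0x%02x then 0x%02x\n' "$v" "$(( (~v) & 0xff ))"
done
```
<br />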
In either case, if there's an error the red LED turns off forever. While the test is running, the LED blinks at the end of each complete cycle.<br />
<br />
I was able to test all the memory modules I had replaced. They were all functioning properly.<br />
<br />
<b>Update 5/5/2012:</b><br />
The source code can be found at <a href="https://github.com/axlecrusher/AvrProjects/tree/master/avr_dramTest">https://github.com/axlecrusher/AvrProjects/tree/master/avr_dramTest</a></div>Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com23tag:blogger.com,1999:blog-1268396482010826357.post-40892178846929611822011-10-15T13:21:00.029-04:002012-09-15T21:10:16.334-04:00Recovering RAID5 from Multiple Simultaneous Drive FailuresRAID5 is a redundant disk system that protects against a single drive failure. The array can keep functioning, allowing you to replace the defective disk and rebuild the RAID without any data loss. However, if more than one disk fails at a time, RAID5 will not help you (there are other RAID levels that can). Sudden multiple disk failure is exactly what happened to my system one night.<br />
<br />
<b>Edit (9/15/2012):</b> I added additional information at the bottom of this post which makes the re-assembly process easier. I recommend it rather than the --create procedure detailed below. It is still a good idea to read the entire post though.<br />
<br />
Once or twice in the past I have had a single drive fail because of a loose or faulty SATA cable. This is easily resolved by powering down the computer and re-securing the cable. I usually notice a drive failure within a week (I should set up an alert system). But recently, two drives failed within two hours of each other. I hadn't even noticed the first drive failure before the second drive failed. Rebooting the computer cleared up the SATA errors that brought the drives down. The drives seemed to be functioning properly; they hadn't suffered a hardware failure. However, the RAID could not rebuild itself because Linux had marked both drives as faulty. At this point I had 6TB of data at risk, with partial backups several months old. I was mostly worried about photos taken over the past several months that can't be replaced.<br />
<br />
So what to do... try not to panic, this is going to get messy.<br />
<br />
I began by trying to figure out which drives failed, and in which order, by issuing <span class="code">mdadm --examine</span> for every device in the array. I focused on the last portion of the output, which contains the status of each device. This data is recorded independently on each device in the RAID, so you can compare the output and find differences. In a properly functioning RAID the output is identical for each device. Below is the output for the /dev/sda1 device.<br />
<div class="code">
<br />
Number Major Minor RaidDevice State<br />
this 3 8 81 3 active sync /dev/sdf1<br />
<br />
0 0 8 17 0 active sync /dev/sdb1<br />
1 1 0 0 1 faulty removed<br />
2 2 0 0 2 faulty removed<br />
3 3 8 81 3 active sync /dev/sdf1<br />
4 4 8 97 4 active sync /dev/sdg1</div>
<br />
<br />
Knowing that I lost 2 drives, I figured that this drive had not failed, simply because both failures were recorded on this disk.<br />
<br />
Continuing, I eventually found the second drive to fail, because it only had a record of one drive failure. This drive must have been functioning during the first failure but not the second.<br />
<div class="code">
Number Major Minor RaidDevice State<br />
this 1 8 33 1 active sync /dev/sdc1<br />
<br />
0 0 8 17 0 active sync /dev/sdb1<br />
1 1 8 33 1 active sync /dev/sdc1<br />
2 2 0 0 2 faulty removed<br />
3 3 8 81 3 active sync /dev/sdf1<br />
4 4 8 97 4 active sync /dev/sdg1</div>
<br />
<br />
Then I found the first drive that failed because there was no failure recorded on it at all.<br />
<div class="code">
<br />
Number Major Minor RaidDevice State<br />
this 2 8 49 2 active sync /dev/sdd1<br />
<br />
0 0 8 17 0 active sync /dev/sdb1<br />
1 1 8 33 1 active sync /dev/sdc1<br />
2 2 8 49 2 active sync /dev/sdd1<br />
3 3 8 81 3 active sync /dev/sdf1<br />
4 4 8 97 4 active sync /dev/sdg1</div>
<br />
<br />
Note: I think you can also use the "Update Time" from mdadm --examine to figure this information out. I used it to verify that my logic was correct.<br />
<br />
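A sketch of that Update Time approach, with made-up timestamps standing in for real <span class="code">mdadm --examine</span> output; sorting by time puts the first drive to drop out (the stalest) on top:
<br />

```shell
# Device / Update Time pairs (fabricated for illustration).
# The oldest timestamp belongs to the first drive that dropped out.
sort -k2 <<'EOF'
/dev/sdc1 2011-10-14T22:41:07
/dev/sdd1 2011-10-14T20:13:55
/dev/sda1 2011-10-14T22:41:07
EOF
```
<br />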
Important Note: My computer likes to move SATA devices around at boot time, so the device names listed in the RAID status output were not accurate after rebooting. The above mdadm output says /dev/sdd1 had failed, but the device name I queried mdadm for was /dev/sdf1. You MUST match the current device with the array device number. The correct device order is essential for fixing the RAID.<br />
<br />
Now that I knew the devices and the order in which they failed I could do a little more thinking.<br />
<br />
I figured that since Linux halted the file system and stopped the RAID when the second device failed, there shouldn't be too much data corruption; probably only the data being written to disk near the time of failure, and that data wasn't too important. The last important data had been written a few days earlier, so it should have been flushed from the caches to disk. With these assumptions I decided that it should be possible to just tell the array that only the first failed drive is broken and that the second is OK. Apparently you can't really do this. The only way to do it is to destroy the array and rebuild it.<br />
<br />
That's right, <span style="font-weight: bold;">destroy</span> and then rebuild the array. Pray for <a href="http://hyperboleandahalf.blogspot.com/2010/06/this-is-why-ill-never-be-adult.html">all the datas</a>.<br />
<br />
I googled around for a while to try to see if my idea was possible, it seemed to be. The best validation for my idea came from <a href="http://blog.al4.co.nz/2011/03/recovering-a-raid5-mdadm-array-with-two-failed-devices/">this blog</a>.<br />
<br />
The rebuilding process...<br />
<br />
The first step was to stop the raid device with <span class="code">mdadm --stop</span> before destroying and re-creating it. If you don't do this, you get strange errors from mdadm saying it can't write to the devices. It was aggravating to figure out why, so just do it.<br />
<br />
I decided I needed to protect myself from myself and possibly from mdadm. I wanted to make sure there was no chance that I would accidentally rebuild the array using the first failed (most out of sync) drive. I zeroed the device's raid superblock with <span class="code">mdadm --zero-superblock /dev/sdf1</span>. Now it is no longer associated with any raid device.<br />
<br />
Next I used the output from the <span class="code">mdadm --examine</span> commands to help me construct the command to rebuild the raid.<br />
<div class="code">
mdadm --verbose --create --metadata=0.90 /dev/md0 --chunk=128 --level=5 --raid-devices=5 /dev/sdd1 /dev/sde1 missing /dev/sda1 /dev/sdb1</div>
<br />
<br />
IMPORTANT: Notice that the device order is not the same order as listed in the <span class="code">mdadm --examine</span> output. This is because my computer moves the SATA devices around. It is CRITICAL that you rebuild the array with the devices in the proper order. Use the array device number for "this" device from the output of the <span class="code">mdadm --examine</span> commands to help you order the devices correctly.<br />
<br />
I specified the chunk size using the value from <span class="code">mdadm --examine</span>. I found I also had to specify the metadata version. By default, mdadm used a newer metadata version, which altered the amount of space on each device. The space used for the rebuild needed to be exactly the same as in the original array setup, otherwise the data stripes won't line up (and your data will be munged). You can rebuild the array as many times as you like so long as you don't write data to the broken array setup. I rebuilt my array 3 or 4 times before I got it right.<br />
<br />
To check if the array setup was correct I ran <span class="code">e2fsck -B 4096 -n /dev/md0</span> (the linux file system check utility). I decided it was safer to specify the file system block size to make sure e2fsck got it right. Since I was just testing the array setup, I didn't want e2fsck making any changes to the disk, hence the -n. If the array setup is incorrect the striped data won't line up, e2fsck won't find any superblock, and it will refuse to scan. If e2fsck is able to perform a scan, then the array setup must be OK (at least that's the hope).<br />
<br />
The next part is probably the most dangerous, because at this point you are editing the file system data on the disks.<br />
<br />
Once I was sure the array setup was correct I ran <span class="code">e2fsck -B 4096 /dev/md0</span> to fix all the file system errors. There were thousands of group errors, a few dozen inode errors, and a lot of bitmap errors. The wait was nerve-wracking but eventually it finished. I was able to mount the file system as read only (for safety) and list the files; I was even able to open a picture.<br />
<br />
Lastly, I added the first failed drive back into the array with <span class="code">mdadm -a /dev/md0 /dev/sdf1</span>, and the array began rebuilding the parity bits.<br />
<br />
At this point I began dumping all important data to another drive just to have a current backup. Once the parity bits have been rebuilt I will remount the partition as read-write.<br />
<br />
That's it, RAID5 recovered from multiple drive failure.<br />
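For reference, the steps above can be condensed into a dry-run sketch. It only echoes the commands instead of running them, since getting the device order wrong destroys data; the device names, chunk size, and metadata version shown are the ones from my setup and WILL differ on yours.

```shell
# Dry-run recap of the recovery sequence described in this post.
# run() only echoes each command; swap the echo for "$@" to actually
# execute. Device names and parameters are examples from my system.
run() { echo "$@"; }

run mdadm --stop /dev/md0
run mdadm --zero-superblock /dev/sdf1   # the 1st failed (most out of sync) drive
run mdadm --verbose --create --metadata=0.90 /dev/md0 --chunk=128 \
    --level=5 --raid-devices=5 /dev/sdd1 /dev/sde1 missing /dev/sda1 /dev/sdb1
run e2fsck -B 4096 -n /dev/md0          # read-only check: does the layout line up?
run e2fsck -B 4096 /dev/md0             # real repair, only after the -n check passes
run mdadm -a /dev/md0 /dev/sdf1         # re-add the zeroed drive, rebuild parity
```

Only replace the echo once you have matched every current device name to its array device number.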
<br />
<b>Edit (9/15/2012):</b> I found another page that has helpful advice. <a href="http://zackreed.me/articles/50-recovery-from-a-multiple-disk-failure-with-mdadm">http://zackreed.me/articles/50-recovery-from-a-multiple-disk-failure-with-mdadm</a> Instead of using --create, you can use --assemble --force with the drives that you want to mark as clean and use. This will assemble the array in degraded mode with the devices in the correct order. You can then zero the superblock of the 1st failed drive and then --add it to the array.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com1tag:blogger.com,1999:blog-1268396482010826357.post-23418980740690017862011-02-02T18:44:00.014-05:002011-02-02T21:41:58.477-05:00Verizon DSL Throttling Video DataIt appears Verizon DSL is throttling video data. My roommate and I noticed a few days ago that videos were playing terribly. It didn't matter what site it was streaming from (youtube, hulu, etc.); all videos were buffering extremely slowly. There was no abnormal bandwidth usage at the time. Today I decided to do some testing...<br /><br />Originally I used Chrome for the test, but repeated it in Firefox so that I could get accurate timing data.<br /><br />Here is my test:<br /><br />I connected to my encrypted VPN service. I started Firefox and used private mode to avoid caching. I loaded a video from youtube. Below is a screenshot of the load time for the video while using Verizon DSL with an encrypted VPN. 
Notice it took 30 seconds to load 2.9 MB of video.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZKm-VkU3_3XfAYQbloao9e_vj_Ha41_wfDHFv5wWxKh8uIUir_rJ_CiWTDNvCBfv8JkNz8-D-uCvQV7_OC0_Iz_ovh7ouGecy3WgyaHE9nrmXZwWVTaZOsJvQfZ64ZWqOOrzzFUCAfLaX/s1600/VpnStream.png"><img style="cursor:pointer; cursor:hand;width: 368px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZKm-VkU3_3XfAYQbloao9e_vj_Ha41_wfDHFv5wWxKh8uIUir_rJ_CiWTDNvCBfv8JkNz8-D-uCvQV7_OC0_Iz_ovh7ouGecy3WgyaHE9nrmXZwWVTaZOsJvQfZ64ZWqOOrzzFUCAfLaX/s400/VpnStream.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5569247757171525522" /></a><br /><br />Next, I closed the browser and disconnected from the VPN. I started firefox in private mode again (to avoid caching) and loaded the same video. This time it took 1 minute and 57 seconds to load 2.9MB of video! This is 4 times slower! Verizon is clearly throttling the video. Image below.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9B81otcoI8t_9lL6XxTF3K2zxzGClUuzbkWkR4KbR0CQVzmMW8VWRfyDr9q6jmmdHyaeQq54XfVBJ48scpdc73OI1KBhq1tfn3MgIfKL7HL2cnhJzlJ2qJUP2pIU727WNZGvf3psbjZ0r/s1600/VerizonThrottle.png"><img style="cursor: pointer; width: 368px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9B81otcoI8t_9lL6XxTF3K2zxzGClUuzbkWkR4KbR0CQVzmMW8VWRfyDr9q6jmmdHyaeQq54XfVBJ48scpdc73OI1KBhq1tfn3MgIfKL7HL2cnhJzlJ2qJUP2pIU727WNZGvf3psbjZ0r/s400/VerizonThrottle.png" alt="" id="BLOGGER_PHOTO_ID_5569246683593872722" border="0" /></a><br /><br />I urge people using Verizon (or any ISP) to perform their own tests. Make these incidents known!<br /><br />EDIT: At this point I have only tested with youtube. I may test with other sites in the future. 
Subjectively, Hulu seems ok today.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-15794359921145895262010-01-29T06:39:00.005-05:002010-08-14T12:12:50.891-04:00flac2mp3Being unsatisfied with other flac to mp3 conversion tools out there, I wrote a quick perl script to get the job done. It even handles ID3 tags. Lack of this feature was my problem with other tools.<br /><br />The command would be run by ./flac2mp3 *.flac<br /><br /><div class="code"><br />#!/usr/bin/perl<br /><br />foreach $argnum (0 .. $#ARGV)<br />{<br /> my $flac = $ARGV[$argnum];<br /> $flac =~ s/\.flac$//; #strip extension<br /> $flac =~ s/"/\\"/g; #escape "<br /><br /> my $tagdata = `metaflac --export-tags-to=- \"$flac.flac\"`;<br /><br /> $tagdata =~ s/"/\\"/g; #escape "<br /><br /># print "$tagdata\n"; <br /> $tagdata =~ s/TITLE=/--tt "/;<br /> $tagdata =~ s/ALBUM=/--tl "/;<br /> $tagdata =~ s/ARTIST=/--ta "/;<br /> $tagdata =~ s/GENRE=/--tg "/;<br /> $tagdata =~ s/TRACKNUMBER=/--tn "/;<br /> $tagdata =~ s/DATE=.*(\d{4}).*/--ty "$1/;<br /># $tagdata =~ s/DISCNUMBER=/--tv cd="/;<br /><br /><br /> $tagdata =~ s/\n/" \n/gm; #add " and space to the end of every line<br /> $tagdata =~ s/^[^--].*|\n//gm; #remove extra data and make all one line<br /># print "$tagdata\n";<br /><br /># print "flac -d \"$flac.flac\" -o - | lame -V 2 -h $tagdata - -o \"$flac.mp3\"\n";<br /><br /> system("flac -d \"$flac.flac\" -o - | lame -V 1 -h $tagdata - -o \"$flac.mp3\"");<br />}<br /></div>Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-75933556840583003932009-07-21T19:33:00.001-04:002009-07-21T19:34:30.945-04:00Preventing SQL injection (again)Recently I had to update an old PERL program which, when it was originally written, had no sanitization of user input for SQL statements. The user input (from the web) was simply concatenated into SQL statements. 
This made it very vulnerable to SQL injection.<br /><br />The SQL DBI used in the program did not allow parameterized queries and replacing it with a newer DBI would have required massive logic changes to the program. The solution was to figure out how to properly escape special characters present in the input. This turned out to be pretty simple if the input was surrounded by single quotes within the SQL statement. Assuming this is true, single quotes present in the input can be replaced with two single quotes. This will protect the SQL from injection.<br /><br />Why? ANSI SQL says that a single quote is escaped by inserting an additional single quote directly before it. Escaping single quotes makes it very difficult if not impossible for the input to terminate the SQL string. However, this only works (at least on informix) if the input string is surrounded by single quotes in the SQL. Input strings surrounded by double quotes cannot be escaped.<br /><br />Using this method, combined with <a href="http://www.perlmonks.org/?node=How%20do%20I%20expand%20function%20calls%20in%20a%20string%3F">expanding function calls within strings</a>, I was able to prevent SQL injection without major DBI and logic changes.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-62407575077095332132009-07-09T20:19:00.004-04:002009-07-10T22:43:42.780-04:00Been a whileIt's been a while since my last post and a lot has happened since.<br /><br />Recently (in my spare time) I have been focusing on the second version of my game engine, Mercury.<br />Development is picking up speed and the project is really taking shape. My personal goal is to get the engine functioning enough to make a few short games. The previous version of the engine was successfully used by the UMBC game development club for their year-long 3D project. 
I'll probably start writing about graphics programming more than anything else.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-91787900368667519092009-01-31T09:37:00.007-05:002009-01-31T10:50:44.691-05:00Writing secure SQL applicationsWhen writing applications that make use of SQL, specifically applications that live on the web, security should be a high priority. Unfortunately security usually ends up as just an afterthought. In my experience reviewing and maintaining web applications written by others, I have found that they take little to no precaution against SQL injection.<br /><br />SQL injection is the practice of crafting user input to alter the function of a dynamically generated SQL statement. In web-based languages, SQL statements are usually constructed using string concatenation to combine the query statements with the query values. This can lead to very dangerous conditions. Consider the following simple query.<br /><br /><div class="code">select username from user_table where email='myemail@email.com'<br /></div><br />Assuming the email address is inserted into the query using string concatenation it is trivial to alter how the query functions. If I entered my e-mail as:<br /><div class="code"><br />myemail@email.com'; drop table usertable; --<br /></div><br />The resulting query would be:<br /><div class="code"><br />select username from user_table where email='myemail@email.com'; drop table usertable; --'<br /></div><br />This would instruct the SQL server to drop the table (assuming the application has adequate permissions). Of course you can construct any statement you wish to manipulate the SQL server.<br /><br />The usual protection against this type of attack is to escape special characters such as ' and ;.
This can help improve security but is not foolproof.<br /><br />Consider the following:<br /><div class="code"><br />select username from user_table where id=123456<br /></div><br />If the user id could be manipulated by the user it would be possible to make the id something like:<br /><div class="code"><br />123456 and 0<(delete from user_table where id != 123456)<br /></div><br />The resulting query would be:<br /><div class="code"><br />select username from user_table where id=123456 and 0<(delete from user_table where id != 123456) </div><br />This would instruct the SQL server to delete all users whose id is not 123456. Notice we have not used any special characters so escaping would not help in this situation.<br /><br />Now there is a rather nice solution to these problems: parameterized queries. Parameterized queries allow you to prepare queries and then send in values at execution time.<br /><br />Using the last example, the parameterized query would look like this:<br /><div class="code"><br />select username from user_table where id=?<br /></div><br />Parameters are usually indicated with a ? but may depend on the SQL library. The query is prepared by using something similar to:<br /><br /><div class="code">$query = $db->prepare("select username from user_table where id=?");<br /></div><br />We only need to do that once.<br /><br />Then we can execute it as many times as we want with something similar to:<br /><div class="code"><br />$result = $query->execute("123456");<br /></div><br />The neat thing with this is that the database library will handle inserting the parameters into the query. There is no need to escape special characters using this method. 
I would argue that using this method makes SQL injection extremely difficult, if not impossible.<br /><br />Using parameterized queries is usually a little more work than just concatenating strings, but the benefits are well worth the extra effort.<br /><br />I have used parameterized queries using both PERL and PHP with Informix and mySQL databases. The PERL module is DBI, the php class is mysqli. They function differently but the concept is the same.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-13202082992480743622008-11-16T19:32:00.012-05:002009-02-01T09:34:55.984-05:00PS3 MP4 Encoding with LinuxAfter a lot of reading, searching, and trial and error, I have come up with a fairly simple way to convert video and audio content into an h264 and aac stream. The stream is packed into an MP4 file and can be streamed to a PS3 via a media server such as MediaTomb.<br /><br />The following script makes it a pretty automated process. It relies on mencoder and MP4Box. I have been using it to convert DVD vob files into smaller MP4 files, with nearly the same quality. 
I can't actually tell a difference from the original DVD source.<br /><br /><div class="code"><br />#!/bin/bash<br />VIDEOFILTER=crop=704:480:8:0<br />ENCOPTS=subq=5:bframes=4:b_pyramid:weight_b:psnr:frameref=3:bitrate=$2:turbo=1:me=hex:partitions=all:8x8dct:qcomp=0.7:threads=auto<br /><br />mencoder -v \<br /> "$1" \<br /> -alang en \<br /> -vf $VIDEOFILTER \<br /> -ovc x264 -x264encopts $ENCOPTS:pass=1:turbo=1 \<br /> -ofps 24000/1001 \<br /> -vobsubout "$3_subtitles" -vobsuboutindex 0 -slang en \<br /> -passlogfile "$3.log" \<br /> -oac copy \<br /> -o /dev/null<br /><br />mencoder -v \<br /> "$1" \<br /> -alang en \<br /> -vf $VIDEOFILTER \<br /> -ovc x264 -x264encopts $ENCOPTS:pass=2 \<br /> -oac faac -faacopts object=1:tns:quality=150 \<br /> -passlogfile "$3.log" \<br /> -ofps 24000/1001 \<br /> -o "$3.avi"<br /><br />MP4Box -aviraw video "$3.avi"<br /><br />MP4Box -aviraw audio "$3.avi"<br /><br />mv "$3_audio.raw" "$3_audio.aac"<br /><br />rm -f "$3.mp4"<br /><br />MP4Box -isma -hint -add "$3_video.h264#Video:fps=23.976" -add "$3_audio.aac" "$3.mp4"<br /><br />rm "$3_video.h264" "$3_audio.aac"</div><br /><br />Running the script involves something like...<br /><div class="code"><br />./x264Encode2.sh Terminator2.vob 3800 Terminator2<br /></div><br /><br />The first argument is the source file, the second is the target bitrate, and the 3rd is the target file. I do not put an extension on the target as the script will do this. A bitrate of 3800 seems high enough to produce an encode that looks nearly identical to the source DVD.<br /><br />You will probably need to tweak the VIDEOFILTER crop filter according to your video source. You can run mplayer on your source using -vf cropdetect to determine the correct cropping arguments. Of course you can also change the ENCOPTS to your liking, although I find the current options acceptable.<br /><br />Once the encode is finished you have a nice MP4 file. 
The avi file is left after the encode in case something goes wrong, so you don't have to re-encode the source. If everything goes ok, you can delete this file.<br /><br />Let me know what you think.<br /><br />EDIT: The PS3 seems to be picky about the video resolutions; be careful when cropping to strange resolutions. I'll try to investigate this more, as I ran into a problem with a cropped widescreen video. Removing or tweaking the crop made a short test encode work.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-79736087716787941402008-09-24T21:20:00.003-04:002011-01-19T20:40:35.322-05:00Callbacks, the trick pointer you executeAt work I tend to develop programs using C#, which I don't particularly like but it does have some neat features. One of the features that caught my attention is delegates. Delegates are essentially function pointers (actually they do a little more than that). A function pointer is just a pointer to a function, much like a pointer to a variable. I found it extremely interesting how delegates are used in ASP .NET for event driven processes.<br /><br />I began wondering if I could implement similar systems in my own projects using C++.<br /><br />Callbacks in C++ provide a way to call functions by executing function pointers. In C# delegates are type safe, which helps ensure you don't end up passing and receiving junk data to and from your function. Callbacks are not quite as nice; they are not necessarily typed and usually make heavy use of void pointers and unsafe type casting. To achieve typed callbacks in C++ I started making a templated callback class that allows for typed arguments and typed returns. Using a templated library helps ensure type safety and provides excellent code reuse when using callbacks with various types. 
Using the templated callback library, making a callback is as easy as...<br /><br /><div class="code"><br />void myFunction(int x)<br />{<br />...<br />}<br /><br />Callback1<int> myCallback( myFunction );<br /></div><br />Executing the callback is then as simple as<br /><div class="code"><br />myCallback(7);<br /></div><br />Why would we want to go through all this trouble just to run myFunction? Well, think about a system that executes functions but the functions are not known until runtime. Event processing tends to be one such system.<br /><br />For event processing you could have a function that contains one giant if else or case statement for every possible event. Or you could make a system where you register functions with a type of event, and when the specific event occurs, a callback to your function is executed. This type of event system is exactly what I used callbacks for.<br /><br />In the Stepmania Online game server I am developing, players are able to enter commands from the chat interface to manipulate the server. Since the events the server is handling are text strings, a switch statement is not an option. With a couple of commands, multiple if else statements are an ok solution. However once the number of commands begins to grow, the if else statements become very long and ugly. To solve this I made a list of callbacks, each associated with a string command. I then "registered" each function with a string command by creating a callback to the function and adding it to the list along with the associated string command.<br /><br />When a command is received, the program simply loops through the list of command strings and callbacks. When a match is found between the stored string and the command, the callback associated with the command is executed. No giant if else statement required!<br /><br />The real power in this system is that new commands can be added just by compiling and linking them in. 
This means the game server could be released as a library and the chat commands expanded upon without touching or recompiling the core of the server. Very useful!<br /><br />I have found that it takes a while to fully grasp the concept of callbacks, but it is well worth the effort.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-17035039913120125712008-09-04T06:25:00.008-04:002008-09-04T08:25:58.412-04:00Revisiting old codeThis past week I began working on Mercury2, which will be an almost complete rewrite of my first game engine, Mercury. So far I have only worked on the render window. While working on it I decided to look at my old code from Mercury1 to try to figure out how I had done it. When looking back at old code, it's amazing to see how much your coding style changes and improves over the years. It's been over three years since I wrote the windowing code for Mercury and it certainly shows.<br /><br />Looking at my old window code, it was clear I didn't have any understanding of what I was doing when I wrote it. Actually a lot of it was taken (with credit given) from NeHe lesson 1 and was modified to be a little more flexible (in my mind at the time). The result worked well, but the code was pretty convoluted. I decided it was best to sit down and review my old code to try to figure out exactly how it was supposed to work. 
So after a long review of my old windowing code and some heavy reading of MSDN documentation, I developed a much more solid window system.<br /><br />Moving forward in the project I think it will be difficult to properly design the project knowing I have large amounts of "old code that works" that I could fall back on.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0tag:blogger.com,1999:blog-1268396482010826357.post-63766190433005593632008-08-17T19:57:00.000-04:002008-08-17T19:57:22.940-04:00Open source development vs. business developmentOver the past eight months at my current job I have had an opportunity to work with various people in designing new programs to fit their business needs. While working on the projects, one aspect has become clear: developing free open source software is vastly different from developing software for your employer.<br /><br />The writing of the software isn't so different. You still need to plan your software (based on requirements), write it, and then test it. In my experience, many open source projects are planned rather poorly. They are then initially coded up rather quickly to a point of working "well enough". The programs tend to be somewhat tested by the developers but rely heavily on "community testing." This often results in more than a few negative user experiences. Development of many open source programs begins to stagnate after a while, ending up in a continuous state of development, never really being done.<br /><br />In business environments, I have found, software development works differently. Planning is done with a "client". Rarely does the client actually know the full requirements of the software they are requesting. I generally try to understand the current business process the client is trying to enhance or automate. While discussing the business process with the client, we try to determine the actual requirements based on the business process. 
(I have found that the client doesn't always know the complete process or may forget parts. It's good to involve a few people that know and work with the process.) One fact about software requirements: they will change throughout a project, no matter what.<br /><br />After the requirements are known, the real work can begin. However, unlike most open source software, there is a very real and probably very near (closer than you would like) deadline. There is little time to be spent refactoring poor, inflexible code. It's very useful to take the time upfront to really think about your approach; you also need to plan for testing. I have found that testing generally takes just about as long as writing the program. This is not necessarily because of a large number of bugs but rather who is testing and how much attention they give to it. I like to involve the client in the testing process as they know the process the best. They are the ones who will be able to spot logic errors in the process. The clients tend to think about the process differently than I do as the programmer; what was clear to them may not have been clear to me. Anyhow, testing... allow ample time for it. I often find myself and the client testing right up to the deadline. Even after the deadline problems are often found while the software is being used in a production environment. Testing is very important when your software will have a real impact on businesses and user impressions of that business.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com1tag:blogger.com,1999:blog-1268396482010826357.post-74592662963284985472008-08-01T18:38:00.003-04:002008-08-11T18:39:09.854-04:00A Little BackgroundCurrently I'm a software developer working for a small college. I work with a variety of systems including HP Unix, Mac OS X, and Windows. 
On these various <span class="blsp-spelling-corrected" id="SPELLING_ERROR_0">systems</span>, I program using <span class="blsp-spelling-error" id="SPELLING_ERROR_1">perl</span>, <span class="blsp-spelling-error" id="SPELLING_ERROR_2">php</span>, and C# .NET while developing web-based applications.<br />In addition to my current position I have worked extensively on several open source applications including <span class="blsp-spelling-error" id="SPELLING_ERROR_3">Stepmania</span>, <span class="blsp-spelling-error" id="SPELLING_ERROR_4">Stepmania</span> Online, and The Mercury Game Engine. These projects have <span class="blsp-spelling-corrected" id="SPELLING_ERROR_5">incorporated</span> many different programming techniques and paradigms. I'm always experimenting with new (to me) programming techniques.<br />This blog will be focusing on interesting aspects and discoveries made during the course of my software development.Axlecrusherhttp://www.blogger.com/profile/10787508524742109398noreply@blogger.com0