Tom Duckering

a web log of my technical stumblings

How to stop iPhone from using mobile data for iTunes Match

It looks like Apple has just moved a useful setting: the one that stops iTunes from using up my mobile data allowance.

It used to be under Preferences > Music, but now it’s under Preferences > General > Mobile Data. Scroll to the bottom to find the switch for iTunes.

Filed under: Uncategorized

Chrooted services and pidof failure

I’ve just spent the past hour or so digging into why haproxy couldn’t be stopped on one of our CentOS 5 machines. The symptom was that it was failing to start because an old instance of it was still listening on the same port; the obligatory socket binding failure error messages made that clear.

The next problem: why couldn’t that old instance be stopped? Reading the /etc/init.d/haproxy script, it was apparent that it wasn’t using a pid file to know what to kill. Instead it was falling back on the pidof tool to find the pid of the running daemon. Furthermore, for some reason, /etc/rc.d/init.d/functions calls pidof with the -c switch. This switch causes pidof to ignore any process that doesn’t have a matching root: if the daemon has been chrooted and you’re not under the same chroot, pidof will filter it out.

This was our problem. HAProxy was being chrooted and so the init.d script was unable to find it.

Our temporary fix was to stop haproxy being chrooted. Longer term, though, it’d be better to have the init.d script use the process’s pid file.
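As a sketch of what that longer-term fix might look like (the pid file path is an assumption; haproxy writes one when started with -p):

```shell
#!/bin/sh
# Sketch of a pid-file based stop action. Unlike pidof -c, reading the
# pid file works whether or not the daemon has chrooted itself.
PIDFILE=${PIDFILE:-/var/run/haproxy.pid}

stop() {
    if [ -f "$PIDFILE" ]; then
        kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
        echo "stopped"
    else
        echo "no pid file at $PIDFILE" >&2
        return 1
    fi
}
```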

Filed under: Uncategorized

NetCat, FIFOs for Fun with Servers over TCP

I wanted to interact with a beanstalkd queue server to delete all the items in a queue (a “tube” in beanstalk language). It didn’t have a command to drop an entire queue, but fortunately the interface is just ASCII over TCP. I knew I could use netcat and some scripting, but it’s not totally straightforward: I need to hold a single connection open, because the server keeps state (such as my current queue) per connection.

The following script echoes commands to the server and reads its responses. It uses echo -e so that I can add the line terminator \r that the server expects. I have to read each response immediately using read (for reasons that I shall explain later). I issue a couple of setup commands to make sure I’m using the right queue and not picking items from the default queue. Then I enter a loop: issue a reserve command, fetch the job id from the response, and use that id to issue a delete command. I repeat until I get a response from the server indicating that I’ve not been able to reserve a new job.

The tricky part is that Unix makes it easy to connect one process to another when one process’s output is to be the input of the other, but harder when you want to connect them in both directions. To do this you can use a named pipe (aka a FIFO), a basic filesystem object for simple interprocess communication. You can make one very easily like so:

mkfifo myfifo

To have the script work with netcat you do this:

./delete_queue.sh < myfifo | nc localhost 11300 > myfifo

This allows the delete queue script to talk to the server via an anonymous pipe and receive the response via the named pipe (myfifo).
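A toy version of the same wiring, with cat standing in for nc as the “server”, shows the round trip working (the fifo and output paths here are temp files):

```shell
#!/bin/sh
# The client's request reaches the "server" (cat) through an anonymous
# pipe; the server's reply comes back through the named pipe.
fifo=$(mktemp -u)
out=$(mktemp)
mkfifo "$fifo"

client() {
    echo "ping"            # request goes out on stdout
    read -r reply          # reply arrives on stdin (the fifo)
    echo "got: $reply" > "$out"
}

client < "$fifo" | cat > "$fifo"
result=$(cat "$out")
echo "$result"             # prints: got: ping
rm -f "$fifo" "$out"
```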

delete_queue.sh source:

#!/bin/bash
# Select the queue to work on, watch it, and stop watching the default
# queue. Every command gets a one-line response which must be read before
# the next command is sent.
echo -e "use myQueue\r"
read LINE
echo -e "watch myQueue\r"
read LINE
echo -e "ignore default\r"
read LINE

while true
do
  # Note: a plain reserve blocks forever on an empty queue; beanstalkd's
  # "reserve-with-timeout 0" answers TIMED_OUT immediately instead.
  echo -e "reserve\r"
  read RESERVED_RESPONSE_1

  if [[ $RESERVED_RESPONSE_1 == RESERVED* ]]
  then
    # The response line is "RESERVED <id> <bytes>", followed by the job body.
    JOB_ID=$(echo "$RESERVED_RESPONSE_1" | cut -d" " -f2)
    echo "Job ID is ${JOB_ID}" 1>&2
    read RESERVED_RESPONSE_2   # consume the job body line
    echo -e "delete ${JOB_ID}\r"
    read DELETE_RESPONSE
  else
    exit 0
  fi
done

Filed under: Uncategorized

Return code for init.d scripts status gotcha

I just ran into a problem with the Cassandra init.d script: it does not return the right status code when you ask it for Cassandra’s status. It returned 0 when the service was stopped.

> /sbin/service cassandra stop
> /sbin/service cassandra status
> echo $?
0

However, when you ask an init.d script for the status of a service, it should return an exit code based on the following rules:

https://fedoraproject.org/wiki/FCNewInit/Initscripts#Init_Script_Actions

It doesn’t present much of a problem if you’re installing and controlling your services manually. However, once you start to use config management tools like Chef or Puppet, a programmatic indication of service status is far more important. In our case, Chef believed the service was started when in fact it was stopped.
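A minimal sketch of a status action that follows those rules (the pid file path is hypothetical): exit 0 when running, 1 when the pid file exists but the process is dead, 3 when stopped.

```shell
#!/bin/sh
# Sketch of an LSB-style status action for an init.d script.
PIDFILE=${PIDFILE:-/var/run/cassandra.pid}

status() {
    if [ -f "$PIDFILE" ]; then
        if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
            echo "running"
            return 0               # program is running
        fi
        echo "dead, but pid file exists"
        return 1                   # dead, but pid file remains
    fi
    echo "stopped"
    return 3                       # program is not running
}
```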

Not the first time I’ve seen this issue.

Filed under: Uncategorized

Jetty + Akamai = Expire Header Evil

I was hunting around on the web to see if anyone else had come across this dangerous combination, but it seems that in web terms we are the only ones. So, in order to benefit others I figured I’d post this here.

Our problem was that when we requested content via Akamai EdgeSuite from our origin Jetty-based web app, we were getting Expires header values with the year set to 2019! Frankly disastrous for our main page.

It took a little while to trace, but it turns out that we were using a version of Jetty which uses an incorrect date format for its default Expires header: it sets Expires to 01-Jan-1970. Looks OK, right? Wrong. The relevant RFC allows 01 Jan 1970 (four-digit year, space separators) and 01-Jan-70 (two-digit year, hyphen separators), but a four-digit year with hyphen separators is illegal.

But that’s not the whole problem. You’d expect any compliant cache or browser to spot the bad header format and just drop it. Not Akamai, it seems: it merely takes the first two digits of your year and rewrites them as a two-digit year, meaning you would likely see Expires headers set to the year 2019 or 2020. Compared to 1970 that is a 49 year difference, and compared to the date now that is an 8 year difference. In web caching terms that’s forever, and very bad if you end up caching dynamic resources.

Fortunately the impact was small for us, but it could have been a lot worse. We put a very quick fix into our BIG-IP load balancer to strip the offending headers at the origin, to stop it poisoning any more caches, and then fixed our code to set the header explicitly rather than rely on Jetty.
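The safe format is the RFC 1123 one (four-digit year, space separators), which is what the explicit fix should emit. A quick way to generate it, assuming GNU date:

```shell
# Emit the epoch as an RFC 1123 date, the one HTTP date format every
# cache agrees on.
expires=$(LC_ALL=C date -u -d @0 +"%a, %d %b %Y %H:%M:%S GMT")
echo "Expires: ${expires}"   # prints: Expires: Thu, 01 Jan 1970 00:00:00 GMT
```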

Filed under: Uncategorized

MySQL Users with Wildcard Host Not Working…

I’ve just spent an hour or so scratching my beard with a problem in MySQL. I created a user with a wildcard (%) for the host so that this user can log in from any network location.

create user 'user'@'%' identified by 'password';
grant all privileges on database.* to 'user'@'%';

But for some reason I couldn’t log in from remote locations. It turns out I should have read the documentation more closely: if there is a user entry whose host more specifically matches the host you’re connecting from, MySQL will use that entry instead. Sure enough, I had an anonymous user (i.e. a blank username) with an exact hostname match for my server.
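For anyone hunting the same problem, the offending rows are easy to spot and remove; the hostname below is hypothetical:

```sql
-- List all accounts; anonymous users have an empty User column.
SELECT User, Host FROM mysql.user ORDER BY User, Host;

-- Drop the anonymous user that was shadowing the wildcard entry.
DROP USER ''@'myhost.mydomain.com';
FLUSH PRIVILEGES;
```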

I should say that, in the interests of security, I plan to restrict the % to %.mydomain.com so that we lock it down a bit more.

Removed that and it worked like a dream. Winner!

Filed under: Uncategorized

Ivy and Configurations – The Great Confusion

In the course of my work I’ve been getting to grips with Ivy. The documentation is reasonable when it comes to thoroughly describing the various settings files, Ant tasks and so on, but I find it lacking when it comes to explaining concepts. One aspect in particular I’ve had trouble understanding is configurations. Frustratingly, this rather generic word is slightly overloaded.

In terms of a module we wish to publish, a configuration means a named subset of that module’s artifacts: for example, a release jar, a source jar or a documentation jar.

However in terms of a module that we are declaring dependencies for, a configuration is a collection of dependencies that we want to fetch for some reason. For example, jars for testing (i.e. junit, htmlunit), jars for runtime (log4j), jars for compile time (servlet) or jars as tools (jetty).

I only discovered this thanks to a clarifying post to the Ant mailing list by Archie Cobbs where he says:

Think of it this way:

Rule #1: A configuration is just a name for some subset of a module’s artifacts.

Rule #2: A dependency defined in module A’s ivy.xml that looks like:

<dependency name="B" conf="foo->bar"/>

simply states that “when someone is asking for the artifacts in configuration ‘foo’ of module A, then we’ll also need the artifacts in configuration ‘bar’ of module B”.

Original message here

Glad that I finally understand this. Hopefully this will help someone else who’s struggling to grok this.
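To make those rules concrete, here is a hypothetical ivy.xml (organisation, module and rev invented) declaring two configurations and one mapped dependency:

```xml
<ivy-module version="2.0">
    <info organisation="com.example" module="myapp"/>
    <configurations>
        <conf name="compile" description="jars needed to compile"/>
        <conf name="test" extends="compile" description="jars needed to run tests"/>
    </configurations>
    <dependencies>
        <!-- Rule #2: asking for myapp's 'test' conf also pulls in
             junit's 'default' conf -->
        <dependency org="junit" name="junit" rev="4.12" conf="test->default"/>
    </dependencies>
</ivy-module>
```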

Filed under: Uncategorized

Monitoring File System Activity on Mac OS X

fs_usage is a very helpful tool when you just want to know what some process is doing with your file system.

fs_usage -w -f filesys <pid>

-w for wide, aka more info.

-f filesys for just filesystem info.

<pid> is the process you want to watch.

The man page is nice and straightforward too.

Filed under: Uncategorized

Weblogic NoClassDefFoundException Resolution

I’ve been meddling with WebLogic Server in an attempt to migrate to it from Tomcat (don’t ask). Apart from being a bit of a pain to get running, it also behaves in a slightly odd way when it comes to classloading of libraries included in the WEB-INF/lib directory of a web application.

I was getting the following error when my code was hitting string template code:

CharScanner; panic: ClassNotFoundException: org.antlr.stringtemplate.language.ChunkToken

Despite the fact that the stringtemplate jar was in WEB-INF/lib, it turns out that WebLogic doesn’t give jars in WEB-INF/lib precedence by default; it seems to expect you to deploy your libraries in some central location. To get it to behave you need to add a few lines to weblogic.xml, which must reside in the WEB-INF directory alongside your web.xml (if you’ve not transitioned to using annotations).

<weblogic-web-app>
  <container-descriptor>
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
  </container-descriptor>
</weblogic-web-app>

Not too complicated, but it took me a while to get to the bit in the documentation where this is mentioned.

Filed under: Uncategorized

Puppet Doesn’t Play Nice With Really Big Files

To kick things off, I will share some of my learnings about puppet. To set the scene: I’m using it with a very small number (<10) of AWS EC2 instances. Given a vanilla EC2 instance running a RightScale CentOS image, I do a quick setup of puppet:

yum install -y puppet
echo "server=<hostname of my puppet master>" >> /etc/puppet/puppet.conf
puppetd --test

This first invocation of puppet will not do anything as there is no signed certificate on the puppetmaster. So I jump back to my puppet master server to check for the newly received, but as yet unsigned, certificate:

puppetca --list

Find the unsigned cert and sign it with:

puppetca --sign <cert name>

Back on the new instance I call puppet again:

puppetd --test

Now, with the cert signed, it can do its thing. It always seems to run OK the first time; subsequent runs are subject to “random” errors. I think it’s down to the fact that I’m getting puppet to serve up a couple of reasonably large installation binaries (c. 350MB):

file { "/var/src/biginstaller":
  source => "puppet://my_puppet_master.compute.amazonaws.com/infra/config/biginstaller",
  mode   => 755,
  owner  => root,
  group  => root,
}

My suspicion, based on some googling, is that my puppet master is running low on resources because puppet computes the hash of every file it serves. It does this, sensibly, to see whether the file needs refreshing (on the first run the file is absent, so there’s nothing to compare, which is why the first run succeeds). However, I understand that it computes this hash naively, by loading the entire file into memory. On an m1.small this has a knock-on effect on the machine, and the file server starts to time out.

So I have now switched the installers to be fetched from an AWS S3 bucket, using an exec resource that calls wget, like so:

exec { "Fetch Big Installer":
  path    => "/usr/bin:/usr/sbin:/bin",
  cwd     => "/var/src",
  command => "wget --no-check-certificate https://s3.amazonaws.com/my-installers/biginstaller",
  creates => "/var/src/biginstaller",
  require => File["/var/src"],
}

It seems to be working OK so far. I’m keeping other bits of config, which are more tailored than the installers, served via the normal puppet route; this keeps my custom config and other small scripts under slightly tighter control.
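If you’d rather keep serving a big file via puppet, another avenue I’ve seen mentioned (parameter support varies by puppet version, so treat this as a sketch) is to stop the master hashing the file’s content on every run:

```puppet
file { "/var/src/biginstaller":
  source   => "puppet://my_puppet_master.compute.amazonaws.com/infra/config/biginstaller",
  checksum => mtime,   # compare timestamps instead of hashing 350MB of content
  mode     => 755,
  owner    => root,
  group    => root,
}
```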

Filed under: devops