March 09, 2006

Switching from Comcast to iTunes?

So, it looks like Apple is finally introducing some new ways to purchase video from the iTunes Video Store: the Multi-Pass and the Season Pass. In theory, if all of my favorite shows were offered on iTunes, I could simply subscribe to the appropriate multi-pass or season pass, and download all of my shows to my Mac using my Internet connection. Then, there would be no need for Comcast, my current cable provider. On paper, this should cost me less money, because instead of paying for both cable TV and Internet, I could pay for just the Internet. But let's try to figure it out.

My cable bill for regular, non-digital cable is right around $50 a month. For that $50, my MythTV machine is recording 2 current-events style shows (the NBC Nightly News and The Daily Show with Jon Stewart), 4 hour-long dramas, and 5 half-hour sitcoms.

Now, let's do some math. I am paying $600 per year for cable TV, and for that I am watching 11 shows. So, the per-show, per-year cost comes out to about $54.50. In order for iTunes to be compelling, it needs to beat that number. Apple hasn't announced the pricing for a season pass yet, but we can assume that it will be less than $48 ($2 per episode times 24 episodes). So, things are starting to look pretty good.

Unfortunately, things like the Daily Show are going to be more expensive, because you get a lot more than 24 episodes of that in a year. And in fact, the multi-pass for the Daily Show is $10 per month.

So, my math has to get a little bit more complicated:

9 season-pass shows * $48/season = $432
2 multi-pass shows * $120/year = $240
Total for a year's worth of TV on iTunes = $672
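The arithmetic above can be sanity-checked in a few lines of Python (all of the iTunes prices here are the guesses from above, not announced numbers):

```python
# Cable vs. iTunes, using the estimates from the post.
CABLE_PER_MONTH = 50                 # basic Comcast cable
SHOWS = 11                           # shows the MythTV box records

cable_per_year = CABLE_PER_MONTH * 12             # $600
per_show = cable_per_year / SHOWS                 # ~$54.55 per show, per year

itunes_per_year = 9 * (2 * 24) + 2 * (10 * 12)    # 9 season passes + 2 multi-passes
delta = itunes_per_year - cable_per_year

print(cable_per_year, round(per_show, 2), itunes_per_year, delta)
# -> 600 54.55 672 72
```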

Only $72 more than Comcast. That's not too bad, when you consider that:

  • There are no commercials in the video content served through iTunes (for now).
  • The shows are all portable -- I can watch them on my PowerBook, or a video iPod.
  • I would actually own the shows -- if I had enough local storage, I could save them all, and go back in time and watch whatever I wanted, whenever I wanted.
  • Things like the NBC Nightly News are already available on the Internet for free, so it might be free on iTunes too.
  • A season pass will hopefully cost less than buying each episode individually, which makes the above number look better.
  • Not all shows have 24 episodes in a season (e.g. "Battlestar Galactica"), further making the above number look better.

Of course, the negatives are that the quality of the video isn't as good (the resolution is about a quarter the size of regular TV resolution) and the content won't be available until a day after it originally airs on broadcast TV. But I very rarely watch things in realtime anyway, so I don't think that will be too big of a deal for me.

So, I'm going to be watching the pricing for Apple's Season Pass content with interest. If it is really cheap, then I could potentially start saving some real money versus buying cable TV.

-Andy.


Posted by andyr at 12:26 PM | Comments (1)

December 23, 2005

Merry Christmas to Andy, from Andy

So, in typical "guy" fashion, I left all of my Christmas shopping until very nearly the last minute. I managed to get everything done during the course of a four-hour shopping sprint today, which is great. What is not so great (from my wallet's perspective) is that I was bitten by the "impulse purchase fairy", and picked up one of those new-fangled Nokia 770s at CompUSA:

My new Nokia 770 Internet Tablet

My initial impression, after mucking around with it for a bit this evening, is that this little device is going to be a worthy investment. The screen is pretty amazing (as you can tell from how macnn.com looks), yet the device is super tiny and lightweight.

Expect more nerdy ramblings as I play with my new toy.

-Andy.


Posted by andyr at 09:57 PM | Comments (0)

December 03, 2005

DSL Downtime

So, to add complication to everything else that is going on, Kevin's and my DSL connection went down last night. It all started a few weeks ago, when Kevin got onto his latest kick, which is to build his own server/MythTV/Linux box. I told him that if he wanted to do his own Internet thing, he could buy a cheap Ethernet switch and peel off his own slice of Internet before it goes into my server/router/NAT/firewall machine, redefine.

And of course, because Kevin does everything that I say, he went out and bought an Ethernet switch, to go with the new machine. With DSLExtreme (our ISP), we have like 8 static IP addresses. So last night (before heading out to the movies with Pratima and Kalpana), I went on DSLExtreme's website, and added a second static IP address to our account, for Kevin.

And that is when the trouble started.

I think something like 10 minutes after I added that second IP, our Internet was down. Of course, I didn't notice for a few hours, since I was out. And by the time I did notice, it was late, so I just went to bed, hoping it would all be better when I woke up.

Well, it wasn't. So I called tech support today, and they told me they would check the line, and call me back in 30 minutes.

Well, they never called back. So, approximately 12 hours later, I called them back, but this time, from Illinois. After the tech support guy spent some time messing around, he said that there was a problem with the router (which I surmised on my own), and that he would call me back in 30 minutes after he escalated the issue.

This time, he actually did call me back, but unfortunately, it was an instance of "good news/bad news". The good news was that he fixed the router. The bad news was that he changed my IP address. This is bad, because I don't have any out-of-band way to get at redefine, in order to change the IP configuration of my machine.

But luckily, as it turns out, I do have out-of-band access to redefine -- Kevin. Thankfully, he was at home, and I walked him through re-configuring redefine, and so now I am back in business.
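For the curious, the walk-through itself isn't much: on a FreeBSD box of that era, the static address lives in /etc/rc.conf. A sketch, using a placeholder interface name and addresses from the documentation range (the real values are whatever DSLExtreme assigned):

```
# /etc/rc.conf -- xl0 and the 192.0.2.x addresses below are
# placeholders for illustration, not the real assignments
ifconfig_xl0="inet 192.0.2.10 netmask 255.255.255.248"
defaultrouter="192.0.2.9"
```

After editing, re-running ifconfig and route by hand (or just rebooting) picks up the new address.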

Woo-hoo! Geek tax paid. Thanks Kev.

-Andy.

Posted by andyr at 11:37 PM | Comments (0)

November 21, 2005

Upgrading my MythTV machine to Ubuntu Breezy

In the mood for inflicting pain on myself, I decided to take my perfectly functioning MythTV machine, which was running Ubuntu 5.04 (Hoary), and upgrade it to Ubuntu 5.10, the Breezy Badger. I didn't have any really good reason to attempt this upgrade, except for a morbid curiosity as to how it would work. Or not work, as the case may be. After performing the upgrade, it seemed like apt had decided to wipe out all of the MythTV packages, instead of upgrading them to the newest version.

After re-installing these packages, I found:

  1. The mythbackend process could no longer login to the MySQL database,
  2. The ivtv drivers for my Hauppauge PVR-250 TV capture card were not included in the new 2.6.12 kernel,
  3. The IR drivers for the remote control of the PVR-250 were not included in the new kernel, and
  4. The "mythtv-themes" package doesn't appear to be in any of the official Ubuntu repositories, rendering the mythtv-setup and mythfrontend programs un-runnable.

So, as it turns out, I spent about 20 minutes doing the upgrade, and about two hours hacking my way through the aftermath, getting the machine back to a state comparable with where it was when I started. And it was all finished in time to record the NBC Nightly News, which the machine does at 5:30 every day.
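For anyone hitting the same mythbackend login failure: the credentials mythbackend uses live in a small config file, and the MySQL grant has to match them. The stock Debian/Ubuntu values look roughly like this (check your own file rather than trusting these):

```
# /etc/mythtv/mysql.txt -- the values shown are the common packaged
# defaults, not necessarily what your installer generated
DBHostName=localhost
DBUserName=mythtv
DBPassword=mythtv
DBName=mythconverg
```

If the grant got lost in the upgrade, re-issuing GRANT ALL ON mythconverg.* TO 'mythtv'@'localhost' IDENTIFIED BY 'your-DBPassword' as the MySQL root user usually sorts it out.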

Fun!

-Andy.

Posted by andyr at 12:24 AM | Comments (0)

November 20, 2005

Kevin's latest PC

So, ever since I bought my new iMac, a "computer buying frenzy" has taken hold in Sunnyvale. Kevin has upped the ante by purchasing two machines -- an iMac (for GarageBand), and a Sony of his owny (for Linux, Apache, and MythTV). The iMac hasn't arrived from Apple yet, so Kevin scooped up the Sony VGC-RC110G today, to start playing with that first:

DSC04059.JPG

In the Reitz family tradition, I made him take the machine apart before anything else happened with it. One of the reasons Kevin chose this particular machine is that it is supposedly very quiet. Taking a look inside, that could certainly be the case. The 400W power supply has quite a large fan in it, which hopefully spins at a lower RPM. The video card has only a heat sink (no fan), and the CPU has a heat pipe (potentially water cooled) combined with the biggest heatsink that I have ever seen (and I've seen the inside of the PowerMac G5). Sitting beside the heatsink is an even larger fan than the one in the power supply.

So, there is every possibility that this could be one quiet machine. I don't think it will be quieter than my iMac, but it will certainly be quieter than my Dell Precision Workstation 420, which has at least one fan in some stage of going bad, and so has been making quite an annoying racket for months now (but not annoying enough for me to fix it!).

Anyways, hardware-wise the Sony PC consists of an Intel 945P chipset (on an Intel-made motherboard), a Pentium D 830 (dual-core Pentium 4 running at 3.0GHz), 1GB of RAM, and a 250GB SATA disk. The machine also includes an ATI X300 PCI-Express video card (which can probably be made to work in Linux), and a Sony "Giga Pocket" video capture card. This card doesn't appear to be supported under Linux, but I found that it has a Conexant CX23416-22 chip on it, so getting it to work under Linux might just be a possibility. I am encouraging Kevin to work with the folks behind the ivtv project to see if this can be made a reality. I think it would be a nice story: a piece of hardware that initially worked only with Sony's proprietary TV capture software, was later expanded to work with Windows Media Center Edition, and now works in Linux too.

Expect more updates on that, and Kevin's general progress with this machine, in the coming months. For now, there is a gallery of photos available for your enjoyment.

-Andy.

Technorati Tags: Linux, Sony, VGC-RC110G, MythTV

Posted by andyr at 11:10 PM | Comments (0)

November 16, 2005

Goodbye Slashdot, hello digg

Late last week, I saw a link called "digg vs. slashdot" posted to O'Reilly Radar. Curious, I skimmed the article, and then checked out digg. At first glance, it didn't really grab me -- but then I read the "about" blurb, found that it is like Slashdot + wiki, and got intrigued.

O'Reilly has also come through with a link to a BusinessWeek article about digg, and now I'm sold. I have been pretty unhappy with Slashdot for a while now -- duplicate postings, a low signal-to-noise ratio in the comments, etc. I have never even bothered to get a Slashdot user account, because I just don't see the point. I never added the site to my RSS reader, and I have gotten down to checking it only a few times a week.

But all along, the basic problem with Slashdot hasn't been the site itself -- but rather its editorial approach. And it really looks like digg is fixing that, in a social software, "let's harness the collective intelligence of everyone" sort of way.

Which I really like. So, check out digg!

-Andy.

Technorati Tags: Slashdot, digg

Posted by andyr at 01:04 AM | Comments (0)

November 03, 2005

Distressed about DRM

I read a fantastic article over at Ars Technica about the MPAA's latest attempt to add insane DRM to all of our lives. Basically, the giant content conglomerates are so afraid that people might see or hear their content without actually paying for it (gasp!), that they are going to great lengths to coerce government to coerce hardware manufacturers to make devices that coerce consumers into playing by the rules. And of course, the rules are going to be written by the content conglomerates, so they will be more restrictive and draconian than ever.

Whenever I read an article like this, I find it really distressing. I think that if the MPAA were to succeed in all of its goals, trying to consume content would become not only expensive, but annoying as well. And I think that the more the content conglomerates try to crack down, the more people are going to do one of three things:

  1. Turn the TV off,
  2. Turn to piracy (because as we all know, these restrictions won't stop the pirates), or
  3. Turn to small content producers, who are thrilled when anybody consumes their stuff.

I know that personally, I would probably do some sort of combination of the three. But I would much rather buy content, for a reasonable price, that lets me use it in the way that I want. So far, Apple has been doing a pretty reasonable job in this effort (although FairPlay certainly isn't perfect, not by any stretch of the imagination).

The optimist in me hopes that we'll have this sort of cheap and fair-use friendly content in the future, but currently, my inner pessimist is winning out.

-Andy.

Posted by andyr at 12:25 AM | Comments (0)

November 02, 2005

PuTTY on my cell phone

A few weeks ago, one of the high priced Tibco consultants that we had in the office mentioned that it was possible to obtain an SSH client for my Nokia 6600 phone. At the time, I didn't do anything about it. But I had some free time tonight, so I decided to do some research. And in fact, he was right -- some crazy folks have ported PuTTY to the Symbian OS, which is what my phone runs.

Behold!

DSC03843.JPG

Because I am buying the all-you-can-eat GPRS Internet plan from T-Mobile, I was able to SSH directly from my phone, through T-Mobile's network, through the Internet, to my FreeBSD machine that was about 20 feet away. That's sweet!

The screen and keyboard (specifically, the lack thereof) make this whole thing rather impractical. But I was able to run top, and even my most-favorite of all text editors, joe.

So, I think I have definitely scored myself a little geek toy that I can show off in the appropriate settings.

-Andy.

Posted by andyr at 11:55 PM | Comments (0)

October 15, 2005

Playing with FreeBSD 6

So, I picked up a spare desktop recently at work, and I have finally wound down enough of my current development/deadline stuff to play with it. I decided to install FreeBSD on this machine, to serve as my sandbox for playing with hot new Open Source software (like Wikis, social bookmarking apps, etc.). My sub-goal, however, is to play with FreeBSD itself. My personal webserver in my apartment has been running FreeBSD 4.x for years, but I have kind of "lost track" of current developments in FreeBSD.

To that end, I installed FreeBSD 6.0 beta 4 (which I had on CD), and used cvsup and "make world" to upgrade to the just-released FreeBSD 6.0 RC1. So far, things have been quite smooth. FreeBSD detected all of the hardware, and for being not-yet-finished, things seem as stable and polished as ever.

Because Ubuntu Breezy came out the other day, I configured X and Firefox on the FreeBSD machine, so that I could use it today while my main Ubuntu desktop was upgrading itself. The only problem that I had was convincing my Logitech USB trackball to make the "scroll" button emit a "middle click" in FreeBSD. This works like a charm on Ubuntu, but with FreeBSD, I had to do some hacking. I tracked the mouse management stuff to a daemon called "moused(8)". The problem was that, by default, my mouse was emitting button 4, but I wanted it to emit button 2 (the middle mouse button). So, I found the -m option in the man page, which looked like it would do what I wanted:

     -m N=M  Assign the physical button M to the logical button N.  You may
             specify as many instances of this option as you like.  More than
             one physical button may be assigned to a logical button at the
             same time.  In this case the logical button will be down, if
             either of the assigned physical buttons is held down.  Do not put
             space around `='.

Unfortunately, the first 20 or so times that I tried this option, I couldn't get it to work right. The mouse buttons either didn't behave properly, or my button 4 didn't emulate the middle mouse button. Finally, after much struggling, I re-read that passage very carefully. The option is written N=M, but the text immediately following it talks about assigning M to N. Confusing!
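Concretely: since my physical button 4 should act as the logical middle button (button 2), the mapping reads -m 2=4 -- the logical button goes on the left of the '=', and the physical button on the right. If moused is started from rc.conf, the whole thing would look something like this (the rc.conf hooks shown are the stock ones; your device setup may differ):

```
# /etc/rc.conf -- map physical button 4 onto logical button 2 (middle);
# remember, -m N=M is logical=physical
moused_enable="YES"
moused_flags="-m 2=4"
```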

But, I made the proper adjustments, and now all is well. I used FreeBSD running xfce4 all day today, and thoroughly enjoyed the experience. We'll see what Ubuntu Breezy holds in store for me on Monday.

-Andy.

Posted by andyr at 02:55 AM | Comments (0)

September 21, 2005

What is the best Windows RSS reader?

Without a doubt, the most frequently asked question that I receive when I evangelize blogging and RSS at work is "What is the best RSS reader for Windows?". Unfortunately for me, I do not have a good answer to this question, as I have all but stopped using Windows on a daily basis.

But, since 99.8% of EDS employees use Windows, and because I really want to promote blogging and RSS, I would like to have an answer to this question. So, with Google and Virtual PC by my side, I am going to list some popular Windows-based RSS readers in this post. Hopefully, this list, combined with the comments, will help me to arrive at an answer.

When I googled for "best windows rss reader", I found the following:

Based upon the above research, it looks like the top RSS readers for Windows are SharpReader, FeedReader, and FeedDemon.

Well, I was hoping to be able to try a few of these out on Virtual PC, but so far I have spent an hour just patching Windows 2000 and installing the .NET 1.1 runtime. So, I ask the Internet -- which of these readers is worthy of my recommendation?

-Andy.

Posted by andyr at 11:59 AM | Comments (3)

September 16, 2005

Solaris finally has industrial-strength logfile rotation

In all of my years using Solaris, I have always thought that their solution for rotating the core system log files was a joke. The "utility" (I use the term loosely) that Solaris used to ship with was called newsyslog. It was an incredibly simplistic shell script, and wasn't really re-usable, in terms of being able to rotate log files other than /var/adm/messages and /var/log/syslog.

For years, the Open Source UNIXes have included utilities for managing the system log files based upon a flexible configuration file. FreeBSD, for example, includes the newsyslog utility, which reads from newsyslog.conf, in order to decide which log files to rotate, and how they should be rotated. It's great, and I have thought for years that Solaris should have something like this too.
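For a flavor of what that looks like, here is a representative newsyslog.conf entry (the path and settings here are illustrative, not lifted from a real system):

```
# logfilename               [owner:group] mode count size when  flags
/var/log/httpd-access.log                 644  5     *    @T00  Z
```

Reading across: keep 5 rotated generations, ignore file size ('*'), rotate daily at midnight ('@T00'), and gzip the rotated file ('Z').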

So, it was a pleasant surprise today that when I went to set up log rotation for Apache running on a Solaris 10 server at work, I found that Sun has finally beefed up this part of Solaris. Gone is the weak Solaris newsyslog, and in its place, we welcome logadm. It looks like this utility became available in Solaris 9, but it was news to me until today.

This logadm thing seems pretty nice -- it is super-flexible, in that it took care of all of the quirks around rotating Apache log files. One odd thing about logadm is that every time it runs, it re-writes the config file (/etc/logadm.conf for those of you keeping score at home) in place. For example, here are the two configuration lines that I wrote today:

# -C -> keep 5 old versions around
# -e -> e-mail errors to areitz@aops-eds.com
# -p -> rotate the file every day
# -z -> compress the .1 - .5 files using gzip. this means we don't need to
#       sleep before gzip.
# -a -> gracefully restart apache after rotation
/var/apache/logs/access_log -C 5 -P 'Fri Sep 16 00:24:48 2005' -a \
   '/usr/apache/bin/apachectl graceful' -e areitz@aops-eds.com -p 1d -z 1 \
   -R '/usr/local/bin/analog'
/var/apache/logs/error_log -C 5 -P 'Fri Sep 16 00:24:48 2005' -a \
   '/usr/apache/bin/apachectl graceful' -e areitz@aops-eds.com -p 1d -z 1 \
   -R '/usr/local/bin/analog'

The comments explain most of the options. The '-P' flag, though, was added by logadm itself after it ran. This flag tells logadm when it last successfully rotated the log file, so that it knows when it should rotate again. Kind of a nifty hack for performing that function, if you ask me.

-Andy.

Posted by andyr at 01:08 AM | Comments (0)

September 12, 2005

Hacking the Solaris partition table

It was our sysadmin's last day with EDS today, and as a result, I now have a systems administration aspect to my job. This means that lately I have been noodling around with Sun hardware and software more than I usually do. What follows is the story of how I spent a good chunk of my afternoon:

Due to our local Jumpstart process not properly partitioning the two disks in the E220R that I was trying to build today, I was forced to take matters into my own hands, and fix things manually. Since I'm a hacker at heart, this didn't pose too much of a problem for me. What did pose a problem, however, is getting the E220R box into a position where I could perform surgery on the partition table. Even in single user mode, Solaris refused to unmount /var. Thus, I was forced to find some way to get a non-local version of Solaris running, so that I could perform my surgery.

To my knowledge, Sun doesn't ship any sort of Solaris recovery CD, not even on Solaris 10. Doing a quick Google, I found a few brave souls who have posted instructions for creating your own Solaris recovery CD, but I have things to do, and don't have the time necessary to craft my own CD from scratch. The trick that I know is to boot off of the Solaris install CD, and then break the install process before it gets very far. This can usually net you some sort of shell, which is usually mostly-sortof functional.

When I tried to do this today with a Solaris 10 CD, I found that once the installer started, it mucked with the TTY to the point that when I managed to break it, I couldn't see any characters that I typed, etc. In general, the shell that I got wasn't usable at all. So I tried again, and this time managed to break into the startup sequence before the installer launched, which provided a rather functional shell.

It really seems like Sun should make this easier, however, by providing some sort of bootable recovery CD. This is one of the "rough edges" that Solaris still carries with it, that the Open Source UNIXes have mostly smoothed over. Fortunately, because Sun has given Solaris the Open Source treatment, Sun doesn't necessarily have to provide such a CD -- the community could step up and do it. Another of the advantages of Open Source.

Anyways, after getting my E220R booted off of the CD, I was able to hack the partition table on the boot disk, run newfs, and have a machine with a preserved root partition, but enlarged swap and more importantly, enlarged /var. Mission accomplished, but only after considerable effort.
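In hindsight, there may be a shortcut worth trying next time: if memory serves, the OpenBoot PROM can boot the install media straight into a single-user shell, skipping the installer entirely. Worth verifying on the PROM version at hand, but it should look roughly like:

```
ok boot cdrom -s
# ...boots the Solaris miniroot into a single-user shell, from which
# format(1M), newfs(1M), and mount(1M) are all available
```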

-Andy.

Posted by andyr at 11:34 PM | Comments (0)

September 09, 2005

FreeBSD 6.0 Beta 4 first impressions

By virtue of our IT person announcing that he is leaving EDS, my responsibilities are expanding. While I have prior experience with systems administration, and I have been dabbling in that space while at EDS, I think I'm actually going to have to get more serious about it now.

To get myself acquainted with a Dell PowerEdge 1750 server that we have, I decided to install FreeBSD on it. Seeing as how I haven't been on the "bleeding edge" of FreeBSD for quite awhile now (my home machine is still on FreeBSD 4.10+), I decided to give FreeBSD 6.0 Beta 4 a whirl.

I'm pleased to say that so far, it has been great. The install was a snap (well, mostly because they are still using sysinstall, which I have used many times in the past). All of the server hardware was automatically detected, including the Ethernet adapter, the built-in LSI SCSI Raid, and the dual Xeon processors. In fact, it appears as if SMP is finally enabled in the generic kernel, so I didn't have to re-compile in order to enable the second CPU (that's hot).

Unfortunately, it doesn't look like I'll be able to roll FreeBSD into production -- nobody else on my team has ever touched FreeBSD, and I'm not getting the "eagerness to learn" vibe. So, my options are either Solaris/x86 or Linux, and I think I'm going to take the Solaris/x86 route. But, in the meantime, I'm going to try and play with the new FreeBSD as much as I can. When 6.0 ships, I'm going to have to take a serious look at making the jump on my home server.

-Andy.

Posted by andyr at 11:19 PM | Comments (0)

August 30, 2005

After 230 days of uptime...

...I was forced to reboot my FreeBSD machine today. For the last several months, Kevin and I (mostly Kevin) have been noticing some terrible latency on our DSL connection, as high as 17,000 ms for the first hop. In the past, I have found that resetting our Westell CPE has fixed the problem. But when it happened again yesterday, the CPE reboot trick didn't work.

This really screwed Kevin over, who uses Microsoft Remote Desktop frequently, and planned to work last night. It was a mild annoyance to me -- with 17,000 ms latency, the web and e-mail were basically unusable. I tried calling tech support last night, but the 24 hour help line was closed.

So, when I woke up this morning, the first thing that I did was call, because I knew that if I didn't fix this thing, I would have an all-out riot on my hands (from Kevin). After fighting with the technical support person for the better part of 40 minutes, we came to an impasse. He said that everything was working fine in the network, and I was saying that my FreeBSD / MacOS X combo was definitely not to blame.

I went to work with things still broken, and resolved to come home "earlyish", and hook my Mac directly to the DSL modem, and call again. That would be a scenario that is much more understandable for tech support, and then I could get some resolution. When I got home, I checked that the latency of the DSL link was still through the roof (it was). So, I hooked my Mac up, and checked again. And I'll be damned if things weren't fine. I surfed around, did a speed test, everything -- the thing was performing like a champ.

My faith in life shaken, I came to realize that my FreeBSD box was causing the problem. So, because I didn't have time to troubleshoot any longer, I rebooted it. And that fixed it. Argh!

The Geek Tax having been paid, I am hoping that this is not a sign of impending hardware failure. Or the apocalypse. Either of those two would seriously put me out, anyway.

-Andy.

Posted by andyr at 12:48 AM | Comments (1)

August 21, 2005

Blog redesign phase 1: color

I have commissioned Sara to help me spruce up my blog. She isn't into the "coding" part of the web (HTML, CSS, etc.), so I told her to handle all of the graphics (the part that I am bad at), and I would take care of the coding. To that end, I picked up the 2nd edition of Eric A. Meyer's (CWRU alum) book, "Cascading Style Sheets: The Definitive Guide" from Powell's when I was up in Portland. Today, I used that to help me make mock-ups of the first step in the re-design process: choosing colors. So, I present to you Sara's three proposed color schemes:
  1. Conservative
  2. Edgy
  3. Breaking down the geek blog stereotypes

I'm not sure yet, but I think I'm leaning towards #2.

-Andy.

Posted by andyr at 04:23 PM | Comments (4)

August 09, 2005

Hacking Windows from Linux for Fun and Profit

I, along with the rest of my team at work, am attending Java WebServices training at a Sun facility all this week. At my workstation, there is an old Sun Ultra 10 and a Dell Precision Workstation 210. One of the computers is loaded with Windows 2000 Server, the other with Solaris 9 (you can guess which is which). I found that I couldn't login to the Windows server, so today I decided to have some fun. I brought a Ubuntu Linux live CD in with me, and managed to get the Dell running Linux.

Unfortunately, when Linux booted, I found that the network wasn't working. It appeared as if Sun wasn't running a DHCP server for the lab -- which was confirmed when Chirag plugged in his laptop looking for network. Looking at the Sparc box, I found that it was statically configured. So, I ping'd for a free address, and gave the Ubuntu box an IP on the same VLAN. But no dice -- Sun apparently has separated the Solaris and Windows boxen on different VLANs.

My next trick was to run tcpdump. Usually, by analyzing the broadcast traffic, you can sortof figure out what network the machine is on, and what the default gateway is. From there, you can pick an IP, and be on your way. Unfortunately, I was able to see broadcast traffic from quite a few networks, so it wasn't plainly obvious which network was "the one" for me. I did some trial and error, but I didn't get lucky.

So, the only option I could see was to somehow figure out what IP address the Windows install was configured with, and then re-use that IP on Linux. And since I couldn't log in to Windows, the only way I could think of to do that was to mount the NTFS partition on Linux, and then munge through the registry until I found what I was looking for.

And believe it or not, that is exactly what I did.

I found this MS document which explains all of the registry entries that MS uses for the TCP/IP stack in Windows 2000. Unfortunately, that document isn't 100% complete -- it focuses more on the "tunables" in the stack. However, it references a whitepaper, which had the details of where things like static IP addresses are stored in the registry.

With that information in place, all I needed to know was which file on disk houses the "HKEY_LOCAL_MACHINE" registry hive. This page told me where that file is backed up, which gave me a clue as to what I should search for on disk. In short order, I was poking around the "%SystemRoot%\system32\config\system" file. The Ubuntu Live CD doesn't appear to contain any sort of fancy hex editor, so I just used xxd, which I piped into less. I was able to search around in that output until I found what I wanted, and got the Ubuntu box onto the network.
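In the same spirit, the hunt through xxd output could have been scripted. Registry strings are stored as UTF-16LE, so a naive scan of the raw hive file for the "IPAddress" value name, followed by a search for nearby UTF-16LE dotted quads, gets most of the way there. A sketch (a dumb byte scan, not a real registry parser; the value name and the 256-byte window are assumptions, not something validated against a real Windows 2000 hive):

```python
import re

def find_ips_near(data: bytes, needle: str = "IPAddress", window: int = 256):
    """Scan raw registry-hive bytes for a UTF-16LE value name, and return
    any dotted-quad strings (also UTF-16LE) found shortly after each hit."""
    target = needle.encode("utf-16-le")
    # A dotted quad like "10.1.2.3", with the NUL byte after each ASCII
    # character that UTF-16LE encoding produces.
    ip_pat = re.compile(b"(?:[0-9]\x00){1,3}(?:\\.\x00(?:[0-9]\x00){1,3}){3}")
    hits = []
    for m in re.finditer(re.escape(target), data):
        for ip in ip_pat.findall(data[m.start():m.start() + window]):
            hits.append(ip.decode("utf-16-le"))
    return hits

# usage (hypothetical path): find_ips_near(open("system", "rb").read())
```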

In general, this sort of hacking that I did isn't all that novel. In fact, there is a book out now, called "Knoppix Hacks" (O'Reilly), which details similar sorts of hacks that can be done from Linux. But I am glad to have stumbled onto my own such hack, because now I get to play with Ubuntu during training. :)

-Andy.

Posted by andyr at 11:19 PM | Comments (1)

July 18, 2005

Damn you, Rushabh!

Last week, Rushabh started poking me about enabling SSL on redefine's webserver, so that we could post to our blogs securely. This has been on my TODO list for awhile, so I decided to start down this long, dark road on Saturday. After decoding much of the SSL certificate generation and Apache configuration crap that I needed to go through, I found out that the version of Apache that I was running didn't have SSL support compiled into it.

Drat.

So today, I uninstalled my old apache, and installed a new one that had mod_ssl compiled in. At first, everything was going swimmingly. I got Apache to agree that my new SSL-enabled config file was okay, and then restarted it. All was well, but SSL didn't work. I found that I had to use the 'startssl' instead of the 'start' parameter. And of course, after I figured that out, all hell broke loose.

To make a long story short, first Apache wouldn't start at all. Some googling told me that mod_ssl rejiggers Apache's internal API, requiring all modules to be re-compiled. Great. After a tense half hour of hacking (and Apache throwing random bus errors), I managed to recompile all of the PHP crap, and now things appear to be stable.

whew

-Andy.

Posted by andyr at 11:18 PM | Comments (0)

July 15, 2005

RUG: SLM317: Adding Value with Automated Trouble Ticketing

The focus of this talk is on improving incident management, more for resource failures than end-user requests, and specifically on SIM. In the past, auto-generated tickets haven't been correlated, and there have been duplicate tickets submitted by users. The technology of Event Manager and Help Desk has advanced enough that it is worth another shot. Help Desk has more automation capabilities; Event Manager is more dynamic.

Central issue is that alerts say what physical resource is broken, not what service is affected. Not possible to automatically notify users.

Solution: In the CS tradition, insert another layer in between EM and Help Desk. This is SIM (Service Impact Manager). Event Management can reduce event flow (filtering, duplicate detection, enrichment, etc.). Correlation not required by SIM model. Needs work to define service model -- can use discovery to determine infrastructure & some config/topology, but need to define actual user-perceived services by hand. Can do master/child tickets automatically. List of services affected in ticket can be dynamic (as additional services go down or get fixed).
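The core of the service-model idea can be sketched in a few lines: a hand-maintained map from a failed CI to the user-perceived services that depend on it. The CI and service names here are invented for illustration, not from BMC's product:

```java
import java.util.*;

public class ServiceImpact {
    // Sketch of the SIM idea: a hand-built service model maps a failed CI
    // (configuration item) to the user-perceived services it supports, so an
    // alert can say what is broken *for users*, not just which box died.
    // All names below are made up for illustration.
    private static final Map<String, List<String>> MODEL = new HashMap<>();
    static {
        MODEL.put("db-server-01", Arrays.asList("Payroll", "Expense Reports"));
        MODEL.put("mail-gw-02", Arrays.asList("Email"));
    }

    // What an enrichment step would do before cutting a Help Desk ticket.
    public static List<String> affectedServices(String failedCi) {
        return MODEL.getOrDefault(failedCi, Collections.<String>emptyList());
    }
}
```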

IDEA: event suppression? Change tickets that you cut in HD could have CI information in them, and that could then flow into EM, to automatically suppress alerts during change.

My summary: The idea of a SIM seems like a reasonable one. I didn't get a lot of details about BMC's product, so I can't say if that is something that I would want to see in our environment or not. But I think that there is a lot of potential in the EM/SIM/HD combo for doing automation (which is my bread and butter at EDS).

Posted by andyr at 02:24 PM | Comments (0)

RUG: ATR521: Integrating your IT Management tools with BMC Atrium

BSM: aligning business with IT
  • requires business awareness and context across IT silos
  • tools need service centric view instead of infrastructure centric
  • ability to manage business services across entire IT infrastructure
Design around architectural linchpins
  • CMDB
  • Shared service model
  • Dashboards
BMC Atrium
  • Open foundation for information sharing and process collaboration across BMC products and 3rd party solutions
  • shared data repository, common user interface
Components
  • CMDB (Core BSM Data)
  • Service model (business relevance) (Core BSM Data)
  • web services and data access (common data abstraction layer)
  • reporting -- aggregates (presentation layer)
  • view -- management dashboards (presentation layer)
BMC Atrium Service Model
  • Provide Service Model Editor (operates on CMDB -- where model is stored?), API for accessing Service Model
  • Examples: SLAs, Change Management (determine which components affected are part of business service), asset management from service perspective
Notes:
  • In order to get Service Model Editor, have to buy SIM (today)
  • SIM maps incoming events to service model, and correlates to CI
My summary: It sounds like IT is evolving to think of services instead of just raw infrastructure components. I think that this has been happening for a while now, and it is a good thing, but it looks like the vendors are finally catching up. This presentation was pretty high level, so I don't have a concrete view of how good/bad BMC Atrium is. But, at least now I know what it is.
Posted by andyr at 11:27 AM | Comments (0)

RUG: DEV562: Safely Modifying a Packaged Application to allow Upgrade to Future Versions

The basic philosophy for upgrades is NOT to re-do all customizations for the new version; want to make the upgrade automatic (or at least, as automatic as possible).
Automatic upgrades are possible, but there are some manual steps, and care must be taken.
The merge problem:
  • original app evolves along two lines -- the vendor's new version, and my updates to the app -- and the two must be merged into a NEW MERGED VERSION
  • How do you make it so that two independent teams making changes don't conflict with each other?
What do you need to do?
  • Decide whether modification is really needed?
  • Keep an eye on where development is going (vendor and internal)
  • Plan and follow process during upgrade
  • Test and resolve conflicts
Modification needed?
  • Important to the business, or just legacy?
  • Is it an add-on or built-in to the application?
  • Add-ons generally safe, built-in is intrinsic to application, much trickier
Where is vendor going?
  • Know vision and direction of the application supplier
  • consider the philosophy of the application
  • more likely to be compatible as upgrades happen
Where are you going internally?
  • Develop a vision and direction around the application
3 simple rules to follow when making updates
  • Do not repurpose fields (OK to add fields; OK to use existing fields for intended purpose; DON'T arbitrarily re-purpose fields)
  • Do not change existing workflow (OK to add workflow; OK to copy existing workflow and change the copy [pick a different name prefix]; OK to disable existing workflow [the upgrade will re-enable it] -- disabling is the only change allowed)
  • Do not change permissions (Add new groups/permissions, don't touch BMC's)
BMC Commitment (fields)
  • All fields and VUIs of BMC forms will have IDs in a reserved range
  • During upgrade, will not modify or delete any field that is not a BMC field
  • BMC is free to change definitions of fields they own (OK to add new groups and permissions; don't modify perms on BMC's groups)
BMC Commitment (workflow)
  • During upgrade will not modify or delete any workflow that is not BMC's
  • BMC free to modify its own workflow in existing way
Make backups first
  • generate an export file of all definitions in the AR system server
  • can also backup at database level
"ardisabled" utility
  • Available from developer site
  • makes list of all disabled workflow (will need later)
  • Helps you find what needs to be re-disabled after upgrade
  • Run in import mode to do re-disabling for you
"arpermission" utility
  • records all permissions for your groups
  • creates a file containing a list of all workflow/fields/forms in the system and the permissions assigned for specific groups
  • developer community (soon)
After upgrade
  • Restore disabled forms and permissions
  • Restore views (which were exported before-hand) -- note that new fields will not show on view, and will have to be added manually
  • History/change data will be preserved
post-upgrade manual work
  • direct modifications to factory definitions, make again
  • modified qualifications of any application table fields need to be restored
testing
  • you need to run a suite of tests for the application after the upgrade

Posted by andyr at 10:40 AM | Comments (0)

At RUG 2005 today

I am attending the last day of the Remedy User Group (RUG) conference today. Much like I did for JavaOne, I plan to blog about each session that I attend. So, to all of my non-nerd readers: you have been warned.

-Andy.

Posted by andyr at 10:40 AM | Comments (0)

July 11, 2005

iPhoto -> Gallery

So, I've been using iPhoto to manage my pictures ever since I got my first Mac. And while I'm not always happy with it, iPhoto does allow me to at least keep track of the pictures that I'm taking with a minimal amount of effort. iPhoto really falls down when you want to export your photos to the web. I don't have .Mac, so the only other option is some canned HTML that looks kind of funky.

So, I have been using Gallery to fulfill my pictures-on-the-web needs for some time now. However, one pain point has been getting my photos from iPhoto into Gallery. Basically, I have been doing a lot of manual effort, which has consisted of exporting pictures from iPhoto, scp'ing them to my server, then manually importing them into Gallery. The whole process is slow, repetitive, and generally sucks.

I had been thinking about trying to make things easier via Automator, when I stumbled across the free iPhotoToGallery software. This software does exactly what I want -- it provides an easy-to-use interface for exporting my photos directly from iPhoto to Gallery, without any of the annoying pain in-between. It seems like this software is a little rough around the edges, but so far, it has been working for me.

To celebrate, I have posted two new galleries of pictures: July 4th pictures from Chicago, and pictures from my trip to Antioch last Friday.

-Andy.

Posted by andyr at 11:48 PM | Comments (0)

June 30, 2005

JavaOne: TS-7208: JXTA Technology Beyond File Sharing

P2P makes sense now
  • more people (and machines) connected, more data at the edge
  • more bandwidth
  • more computing power
Limitations of Today's Internet
  • physical addressing model (URLs, IPs)
  • centralized DNS
  • no QoS for message delivery
  • optimized for point-to-point (limitations on multicast/anycast)
  • topology controlled by network admin, not applications
  • no search/scoping at network level
  • binary security (intranet or internet)
What is JXTA?
  • highly decentralized and reliable
  • network protocol for creating decentralized virtual P2P network
  • set of XML protocols, bindings for any os/language
  • overlay network, decentralized DHT routing protocol
  • mechanisms, not policies
  • open source (want it to become core internet tech, wide adoption)
  • www.jxta.org
What is different?
  • JXTA addresses dynamically mapped to physical IP
  • decentralized and distributed services (ID, DNS, directory, multicast, etc)
  • easy to create ad hoc virtual networks (domain)
What JXTA does
  • Brings devices, services, and networks together
  • enables interactions among highly dynamic resources
Sample applications
  • storage backup (321 Inc.'s LeanOnMe)
  • Brevient Connect web conferencing
  • grid computing - Codefarm's Galapagos
  • SNS - social network application (most used P2P app in China)
  • Verizon IOBI (trying to lower the cost of delivering content over the Internet)
Tangent: how do china's network filtering/blocking devices work with P2P apps? It seems like China has been doing two things -- trying to censor information coming in from outside of China, and trying to block Chinese-to-Chinese communications, when those communications don't toe the party line. While I won't say that it is impossible to filter/shutdown P2P, it seems like a hard problem.

JXTA status
  • JXTA-Java SE (June 15th release 2.3.4)
    • APIs and functionality frozen
    • Quarterly release schedule
    • full implementation of JXTA protocols
  • JXTA-C/C++ (2.1.1)
    • standard peer
    • extended discovery
    • linux, solaris, windows
    • rendezvous support
  • JXTA-Java ME (2.0)
    • edge peer only
    • CDC 1.1 compliant
  • community: C#, JPython
Looking ahead
  • enhance ease of use and simplify network deployment
  • enhance performance, scalability, security
  • standardize specification further through public organization
Summary: I am totally in love with the idea of peer-to-peer, and JXTA has been on my list of things to check out for a while now. I need to see how I can preach this at EDS.
Posted by andyr at 04:04 PM | Comments (0)

JavaOne: TS-7318: Beyond Blogging: Feed Syndication and Publishing with Java

Starts with overview of RSS, including uses for RSS, news readers, etc. Then moves into some more technical details of an RSS feed. Shortcomings of feeds:
  • Users need one-click subscribe (no standards yet) - Safari RSS doing a good job
  • 10% of all Feeds not well-formed XML
  • Feeds can be lossy (don't poll often enough, will miss stuff)
  • RFC 3229 (FeedDiff) can be used to address this
  • Polling based -- traffic waste; HTTP conditional get and caching ease pain
FeedParser (SAX based) and ROME (DOM based) -- two Java libraries for parsing feeds. Atom is the future of feeds? Atom is becoming an IETF standard, which lends credence to this theory. It is a comprehensive and rigorous specification. RSS 1.0 uses RDF (ick), RSS 2.0 has most of the same functionality, but tosses the complexity of RDF. How to serve feeds:
  • ROME serialization
  • XML DOM serialization
  • Template language: Velocity, JSP
  • Plop it on a web server that supports conditional HTTP GET, etags, etc.
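The DOM-based style that ROME takes (it builds on JDom) can be illustrated with just the JDK's built-in DOM parser. FeedPeek and the sample feed below are my own invention, not ROME's API:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class FeedPeek {
    // Parse an RSS 2.0 document into a DOM tree and pull out the channel
    // title -- the same whole-document approach ROME uses, minus JDom.
    // Returns null on malformed XML instead of throwing, to keep the
    // sketch simple.
    public static String channelTitle(String rssXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            rssXml.getBytes(StandardCharsets.UTF_8)));
            // First <title> in document order is the channel's title.
            return doc.getElementsByTagName("title").item(0).getTextContent();
        } catch (Exception e) {
            return null; // the talk's point: ~10% of real feeds aren't well-formed
        }
    }
}
```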
Publishing Protocols:
  • OLD: XML-RPC based, ad hoc, simple: Blogger, MetaWeblog (most popular), Movable Type, WikiRPCInterface
  • New: The REST based Atom Publishing Protocol
  • Why not SOAP?
Atom publishing
  • Supports features of existing protocols
  • plus administrative features (add/remove users, etc)
  • spec still under development, will be finished soon
Atom collections
  • resources exist in collections
  • examples: entries, uploaded files, categories, templates, users
What is ROME?
  • Java library
  • RSS/ATOM feed parsers and generators
  • built on top of JDom
  • Provides beans as API
Why ROME?
  • existing solutions were incomplete, stale, had unfriendly un-Java API
Pros:
  • simple to use, well documented
  • single java representation of feeds
  • pluggable
  • widely used, got momentum
Cons:
  • Loss of information at SyndFeed level (abstract all feeds, lose special features of specific feed types)
  • DOM overhead
Posted by andyr at 02:02 PM | Comments (0)

JavaOne: TS-7722: The Apache Harmony Project

Harmony is a new project from the Apache Foundation, to do an Open Source J2SE implementation.

Why now?
  • Lot of interest from community, companies
  • J2SE 5 is first version where JCP license permits doing OSS implementation of JVM
  • Sun cautiously supportive
Platform includes both VM and runtime (libraries) -- both critical to have a J2SE implementation. Want freedom of license for components, flexibility of reuse (want to write your own JIT? Do that part, and reuse the Harmony runtime in order to get a complete system). Requires deep work (VM and JIT -- lot of hardcore CS stuff going on) and broad work (libraries -- just lots of code). They also want to build a platform that is 100% compatible, and reasonably fast. Goals: build community, build architecture, then get it finished and certified. The result will be to push Java into places where it isn't now, and where it cannot go.

Motivations:
  • Not going to fork Java
  • Not going to add new and incompatible technologies
  • too hard, OSS can't do it -- wrong, OSS is just license & community, says nothing about technology
  • license of VM is a major deal to some people
  • What about Mustang? Not Open Source.
  • Enable widespread adoption of Java, without re-engineering
  • Provide open and free platform for Linux and BSD communities
  • Java is 2nd-class citizen on Linux, Mono making big inroads...
  • Get at developing economies (like Brazil), that can't afford commercial licenses, or have government directive to embrace Open Source
Design will emphasize portability, so it can run on many platforms.

State of OSS Java:
  • Kaffe VM - borg of VMs (absorbs everything), focus on portability, but performance lags. Harmony is going to work with them
  • GNU Classpath - Java class library; long-running; work with them, but licensing issues
  • Jikes RVM - research VM; VM written in Java (little C and ASM for bootstrap); good performance characteristics
  • ORP C/C++ research VM (Intel research); under Intel license
  • GCJ - compile Java to native binary; lot Harmony can learn from them
  • IKVM - runs Java bytecode on Mono (fast); not Harmony's approach, but learn from them
  • JavaL(?) - brazilian effort similar to Harmony
In summary, I think that Harmony is a great project. Java is going to need a truly open source implementation in order for it to remain relevant and popular for the next 10 years. However, this is going to be a very hard project on the implementation side. So, while the thought of having an OSS JVM is exciting, I have to temper that with the fact that I think it is going to take Harmony a long time to produce a working JVM.
Posted by andyr at 12:29 PM | Comments (0)

JavaOne: General Session: Java Technology Contributions and Futurist Panel

I got to James Gosling's general session over a half hour late this morning (the balance I had to strike between Caltrain's schedule and my sleep schedule). Here are some highlights:

NetBeans has a new set of tools for developing code for mobile phones. Visual app. builder, can also deploy code to phone from NetBeans. Even cooler, if the phone supports the correct JSR, you can do debugging of the app. while it is running on the phone. Single step, breakpoints, etc. -- all over Bluetooth!

They did a demo of an UAV that is powered by real-time Java. The conclusions:
  • RTSJ determinism critical for navigational control
  • RTSJ JVM enabled significant productivity gains over C++ (don't need to worry about low-level stuff like endianness, etc.)
  • Continue to perform research on newer RTSJ implementations
The summary for me is that I would like to check out real-time Java. Yet another thing on my list of "things to surf".

Sun is putting out multimedia versions of all of the JavaOne sessions onto the Internet, free for all (including slides, audio, etc.). Allowing Open Source contributions of translated audio tracks? It seems like people can watch in English, and record their own track, in a different language.
Shifting gears, the second half of the general session is a panel, about the future of Java. On the panel: James Gosling, Bill Joy, Paul Saffo, Guy Steele, and Danny Hillis (Applied Minds, Inc.). This was just a general talk on a bunch of futuristic mumbo-jumbo -- I wasn't really engaged to the point that I extracted anything interesting.
Posted by andyr at 12:29 PM | Comments (0)

June 29, 2005

Sun hardware: UltraSparc and V20z

I spent some time in the vendor pavilion at JavaOne today. One of the vendors that I spent some time picking on was Sun. In particular, I had hardware on the brain today, and I managed to track down like, the one person at Sun's booth who could speak about the present and future of the UltraSparc line. In addition, this fellow was there to talk about the V20z, so I asked him a few questions about that as well.

Regarding the UltraSparc, both the UltraSparc IV and UltraSparc IIIi are currently shipping. The UltraSparc IV has both CMT (Chip Multi-Threading, sort of like Intel's Hyperthreading, but better according to Sun) and multiple cores per CPU. It sounds pretty hot. Unfortunately, it seems like it is only going to appear in Sun's higher-end servers (5U and up), in high densities. The Sun person that I spoke to wasn't sure if it would ever materialize in a workstation, but was doubtful.

The UltraSparc IV is going after highly-parallelized workloads, as is the rest of the industry. However, my group at EDS is working with some applications that are stuck on Sparc, and aren't highly parallel. So, it seems like we're going to be using the UltraSparc III series for a while. The good news is that the UltraSparc III is up to 1.6GHz in speed now, which is not too shabby (for a RISC CPU).

Moving on to the V20z, I cut right to the chase on this one. I knew that the only reason to buy an X86-based server from Sun would be for the management features. Luckily, I was not disappointed. The V20z has two Ethernet interfaces for the purposes of management. Even better, once you configure an IP on the Ethernet, you can SSH to it, and get full access to the serial console (OS), or the internal management console! That sold me right there. In addition, the management Ethernet ports can act as a hub, which means that you can daisy chain a rack of servers together, and only take up one switch port for management. That is really, really cool. One thing is that you have to use crossover cables, because the management ports don't support auto MDI-X (while the main GigE interfaces do).

I don't really keep up with the state-of-the-art for PC servers, but I don't think that you can do SSH management of them. Sun is definitely kicking ass here.

-Andy.

Posted by andyr at 07:19 PM | Comments (1)

JavaOne: TS-3340: Architecting Complex JFC/Swing Applications

Where is the Pain?
  • GUI creation and maintenance
  • Threading issues
  • Widget <-> model binding (n-tier application)
  • Input validation
Reaching Nirvana
  • All about frameworks -- buying, building, or using them
  • OSS frameworks are ideal (great to have source code)
  • building should be last resort
No uber framework
  • Ideally, one framework would solve problem
  • doesn't exist
  • Two wannabees: NetBeans platform and Spring Rich Client Platform (RCP)
  • Vibrant space (new projects popping up all the time)
NetBeans
  • Doesn't address pain points for big apps
  • lack of documentation (yeah, I agree)
Spring RCP
  • Inspired by Eclipse RCP and JGoodies
  • Not released yet, lot of code in CVS
  • not ready yet, but keep an eye on it
No comprehensive framework, some smaller focused ones
  • JGoodies Swing suite ($$$) - has been released, some free parts
  • SwingLabs - Sun's OSS collection of helper frameworks. Also not out yet.
Solving the Pain
  • DRY - don't repeat yourself. Do something twice --> build framework
  • Think Ruby-on-Rails (interesting web framework)
  • Ruby-on-Rails was built this way, so it was a framework that was built in an iterative manner, only for functionality that was actually needed.
Reuse hard in Swing apps, because no strong conventions, GUI layout code can be everywhere.

Solutions
  • GUI builders, but with restrictions
  • Use GUI builder to generate binary artifact (like XUL), user-modifiable code sits outside
  • Load generated GUI at runtime, and manipulate it
  • Provide standard contract for screen creation
  • Introduce form abstraction (smaller than screen, larger than a widget)
  • Both practices increase opportunities for reuse
Framework examples
  • JFace has Window and ApplicationWindow classes
  • Spring RCP defines Page and View abstraction that aligns nicely with Screen/Form concept
Threading Issues
  • Swing components are not thread safe -- only supposed to be accessed by AWT event thread.
  • SwingWorker (new in Java 6, being back-ported to Java 5) and Foxtrot?
  • Comega - experimental programming language from MS, used as a sandbox
  • Concurrency is hard -- Servlet API is admired, because it is simple and single threaded, yet it scales up
  • Because there is a container that does heavy-lifting of thread issues
  • Container-managed Swing (Inversion of Control)
    • Have container control object, and provide services for object
    • Load and initialize screens on background thread
    • Handle asynchronous population and action execution
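The single-threaded-access rule above can be sketched without any actual Swing (so it runs headless): a single-threaded executor stands in for the event dispatch thread, and only the result update is handed to it, the way invokeLater/SwingWorker marshal results. The names here are mine, not Swing's:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EdtSketch {
    // Do "slow" work on the calling thread, then hand only the widget
    // update to the stand-in event dispatch thread. Real Swing code would
    // use EventQueue.invokeLater / SwingWorker for the same hand-off.
    public static String loadThenUpdate(String data) {
        ExecutorService edt = Executors.newSingleThreadExecutor();
        try {
            String result = data.toUpperCase();              // background work
            return edt.submit(() -> "label: " + result).get(); // update on "EDT"
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            edt.shutdown(); // let the JVM exit cleanly
        }
    }
}
```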
Tools
  • JFormDesigner - excellent GUI builder tool
  • Matisse - next version of NetBeans
  • FormLayout - 3rd party layout manager that kicks GridBagLayout in the you-know-where
  • IntelliJ IDEA - IDE
Summary: This was a pretty good talk. The take-aways are to go and keep tabs on some of these frameworks (they may need another year), and to look into some GUI builders and 3rd party layout managers.
Posted by andyr at 04:20 PM | Comments (0)

What sucks about JavaOne

Two things that suck about JavaOne: it is nearly impossible to find power outlets for my PowerBook, and not all of the rooms have WiFi. OSCON puts JavaOne to shame -- nearly every room has a giant block of power strips, for the geeks to plug their laptops in. It is deplorable that JavaOne doesn't have this.

As for the lack of WiFi, I'm a bit surprised by this one. Since I am blogging the conference this year, having the web available is necessary in order for me to sprinkle links into my posts.

-Andy.

Posted by andyr at 03:59 PM | Comments (0)

JavaOne: TS-7302: Technologies for Remote, Real-Time, Collaborative Software Development

Collaboration Technologies
  • occurs within conversations, unlimited # of participants
  • all messages to all participants
  • conversations include multiple channels (conduit for information)
  • collablets provide interface to channel
Collablets
  • software component for specific type of collaboration
  • stateful within scope of conversation
  • only know about their own channel
  • uses XMPP (Jabber)
MOXC
  • message oriented XML collaboration
  • web services approach to collaboration
  • simple, just send XML messages using SOAP
  • send messages over any transport
  • can describe collablet API via WSDL, open to any web services client
All of these technologies allow for integrating collaboration into IDE. Jabber chat is one example, but can also send java code back and forth. Messaging system understands code -- code that you paste in IM is syntax colored, in the right font, same features of IDE (code completion). Can also send XML, HTML, formatted text, etc.

You can go further, and share whole files or projects. In a shared file, you can have shared editing, sort of like SubEthaEdit. It locks the portion of the file being edited; when the lock times out, the change is propagated to the other users. Remote users can compile a shared project, which will actually happen on the source machine (to snag all dependencies).

It seems like the point of the above technologies is to make it easy to implement your own collablets, so you can build custom collaboration modules that suit your particular project or work environment. Very cool.

What about screen sharing (code walkthrough, remote peer programming)? I didn't see the speaker demo this, but it should be possible to make a collablet that does it.

They are also thinking about sharing the debugging environment as well.

Downside: need some sort of server to do it on Intranet, with Java Server Enterprise. Close to getting it working over any vanilla Jabber server (sweet!).
Posted by andyr at 01:05 PM | Comments (0)

JavaOne: TS-5471: Jini and JavaSpaces Technologies on Wall Street

Jini is a tool for building SOAs. The Master/Worker pattern is a common one in Jini systems.
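The Master/Worker shape can be sketched without any actual Jini/JavaSpaces code, using a plain ExecutorService: the master decomposes a job into tasks, workers process them in parallel, and a collector gathers the results. This is just the pattern, not the talk's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MasterWorker {
    // Master decomposes the job into one task per input, workers compute
    // subtasks in parallel, collector sums the results. In a JavaSpaces
    // system the pool would be a space that workers take() tasks from.
    public static long sumOfSquares(List<Integer> inputs, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<Long>> futures = new ArrayList<>();
            for (final int n : inputs) {
                futures.add(pool.submit(() -> (long) n * n)); // one subtask
            }
            long total = 0;
            for (Future<Long> f : futures) {
                total += f.get(); // collector phase
            }
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```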

Generic Virtual Data Access Layer (SOA)
  • Distributed data, bring together (left joins?)
  • Data is federated (many masters)
  • Expose common Grid-like API for clients (JAR), or WebServices, or even JDBC
  • Needs virtual data dictionary, metadata to describe data, to glue together
  • Decompose data access into generic workers -- break down request/response into many sub-tasks
  • Decouple SQL query generation from execution
  • Distribute workers, get parallelism
Scalable Real-time Transaction Processing
  • Parallelize transaction w/o losing FIFO ordering, and still getting 100% reliability
  • Clustered JavaSpaces -- make several instances. Make reliable by replicating data in these instances.
  • Build smart proxy that handles JavaSpace client requests, and distributes work into JavaSpaces cluster
  • Automate deployment and restart of JavaSpace instances, using dynamic service-grid architecture -- gets you dynamic scalability (apache httpd forking style)
Why now?
  • memory was expensive -- not anymore
  • Bandwidth was a bottleneck -- not anymore (GigE, 10Gig)
  • Commodity HW finally enterprise grade
Building a Distributed Compute Grid Framework
  • design grid applications incrementally with Jini and JavaSpaces
  • from the programmer's perspective:
    1. how design application?
    2. how implement the design?
  • too much talk about design, not enough about programming
  • Good design always starts with something simple and evolves -- Jini and JavaSpaces make this easy: loosely coupled components, dynamic and flexible infrastructure
  • Features can be added as needed
Grid computing framework
  • master-worker pattern based compute farm
  • A layer of abstraction over JavaSpaces API and Jini programming model
  • Framework class design
    • Decomposer - concrete class will decide what correct subtask size is
    • Distributor
    • Calculator (processor)
    • Collector
    • Task
    • Result
    • Communicator - communication and synchronization among compute nodes
Putting the Spring into Grid
  • have to consider the user interface for the programmer
  • How can we take POJO model and bring it to Jini/JavaSpaces?
  • Get a lot of power in POJO approach, because we decouple from underlying system (be it Jini, J2EE, etc.)
  • Spring can do remoting without API via exporters on server side, proxies on client side (talk to exporters)
Summary: I like the idea of doing a Grid overlay of federated databases. This is something that I'm going to need to explore more on my own.
Posted by andyr at 12:44 PM | Comments (0)

June 28, 2005

JavaOne: BOF-9840: Make your java apps more powerful with scripting

These are rough notes, as I was only there for the first part of this BOF and it was pretty informal:

Expose a scripting interface to your program. This allows 3rd parties to write code that interacts with your code (think plugins), and lets you develop features and add-ons more quickly & cheaply.

idea: add beanshell to existing application. Once in place, you can use beanshell to poke and prod it, and figure out how it works. Questions like: "What happens to app if I change this value?" are easy with an application that supports BeanShell.

to support scripting in an existing app, may need to provide:
  • extra API to support scripting
  • debugging support
  • logging/diagnostic output
  • CLI for interactive control
  • Editing tools for recording/manipulating scripts (macro recorder)
Posted by andyr at 10:13 PM | Comments (0)

JavaOne: BOF-9335: Scalable Languages: The Future of BeanShell and Java-Compatible Scripting Languages

Doing JSR for BeanShell, to make it standardized, and potentially part of Java proper (someday). Will give more visibility and participation. BeanShell being developed by small team, so this will expand resources. New in 2.0:
  • Full Java syntax compatibility
  • performance: JavaCC 3.0 parser faster and smaller; caching of method resolution gives boost
  • better error reporting
  • Applet friendly (again) -- doesn't trip applet security; advantage of existing reflection-based implementation (do things w/o code generation)
  • new features: mix-ins, properties style auto-allocation of variables (can use BeanShell as more advanced java properties file)
  • Mix-ins: import java object into BeanShell namespace.
Java Syntax Compatibility
  • Full java 1.4 syntax support (on all VMs)
  • Some Java 5 features (all VMs): Boxing, enhanced for loop, static imports
  • Core Reflection doesn't allow introspection into core types -- added this in BeanShell 2.0
True Scripted Classes
  • Generated classes with real Java types, backed by scripts in the interpreter.
  • Scripts can now go anywhere Java goes.
  • Expose all methods and typed variables of the class.
  • Bound in the namespace in which they are declared.
  • May freely mix loose / script syntax with full Java class syntax.
  • Full java syntax on classes -- this, super, static, instance variables, and blocks. (no way to access superclass from reflection API)
  • Full constructor functionality.
Limitations:
  • Reflective access permissions (knocks out applets for the scripted classes)
  • bugs
New APIs:
  • javax.script (JSR-223) - will be a part of Java 6, powerful API for calling scripting languages from Java
  • BeanShell API compiler - have persistent classes backed by scripts.
Compiled API classes are like Python -- you can take a .bsh script, and compile it into a .class file. Then it can be used by native Java code. Will it be like Python, in that if you modify the .bsh, it will automatically re-make the .class? I asked about this, and I found that this is sort of different from what Python is doing. Basically, the .class file only includes a stub that wraps around the functionality that is implemented in the .bsh file. This stub allows the Java code to invoke the functionality of the .bsh file. Behind-the-scenes, BeanShell will launch a separate JVM for each .class file. This separate JVM will execute the code in the .bsh file.

New BeanShell community site, includes Wiki (J2EEWiki). Wiki site is beanshell.ikayzo.org/docs. Subversion for source control.
Posted by andyr at 10:10 PM | Comments (0)

JavaOne: TS-7725: J2EE 5.0 ease of development

The J2EE specification doesn't go far enough -- you need "helpers" in order to be productive and effective in producing a J2EE application. Certain "artifacts" are common, such as:
  • Generate entity beans from DB
  • Using resources (JMS, JDBC, etc)
  • Using patterns (service locator, etc) and Blueprints
  • Provisioning server resources
  • Verifying, profiling.
Offload some of these tasks to tools, others to specifications/core language/etc. How does this talk contrast with the Spring talk (which says that all of this crap should be handled by frameworks)?

J2EE 1.4 free tools:
  • Eclipse - Web Tool Platform (WTP) / J2EE standard tools (JST)
  • NetBeans - 4.1 just shipped (May), full support for all J2EE whiz-bangs
NetBeans notes:
  • NetBeans can do one-click compile-assemble-startserver-deploy-execute (Run)
  • Refactoring at J2EE level (class name change propagates to descriptors)
  • Ant native (project in NetBeans makes build.xml). Good for nightly builds!
  • Blueprints compliant -- what is this? Need to look it up. Looks like best practices for J2EE application layout.
  • Debugging: hides crap from application server in stack trace. Monitor HTTP requests. J2EE verifier tool.
  • Can get JBoss plugin for NetBeans.
  • Wizards for making EJB calls, doing JDBC access, or sending a JMS message
Java EE 5
  • "The focus of Java EE 5 is ease of development"
  • EJBs as regular Java objects (standard interface for inheritance)
  • Annotations vs. deployment descriptors (dependency injections)
  • Better default behavior and configuration
  • Simplified container manager persistence
  • Developer works less, container works more (app server)
Annotations
  • comments that guide code?
  • alternative to XDoclet
  • Syntax is to use '@' symbol
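To make the '@' syntax concrete, here is a plain-JDK sketch using a made-up annotation (the real EJB 3.0 annotations like @Stateless live in the EJB API jars, so this example invents its own @BusinessMethod purely for illustration):

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical annotation, standing in for EJB 3.0-style metadata.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface BusinessMethod {
    String description() default "";
}

public class AnnotationDemo {
    @BusinessMethod(description = "computes an order total")
    public int total(int price, int qty) { return price * qty; }

    // A container (or tool) can read the metadata via reflection,
    // instead of consulting an XML deployment descriptor.
    public static String describe(String methodName) throws Exception {
        Method m = AnnotationDemo.class.getMethod(methodName, int.class, int.class);
        BusinessMethod bm = m.getAnnotation(BusinessMethod.class);
        return bm == null ? "(no metadata)" : bm.description();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(describe("total")); // computes an order total
    }
}
```

The point is that the metadata travels with the code itself, which is what lets EJB 3.0 drop most of its XML descriptors.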
EJBs are POJOs
  • only a business interface to work with
  • XML descriptors replaced by annotations
It looks like if I just learn how J2EE 5 works, I can skip learning the older version, which is far harder/more complicated. EJB 3.0 requires dramatically less XML code to be written in order to work, and less Java code as well. Looks like this stuff isn't fully baked yet, however. :( See below:

Java EE 5 status
  • specs still under expert discussion
  • delivery date is targeted for 1Q 2006
  • Many areas ready: API simplification, Metadata via annotation, dependency injection, persistence
  • NetBeans 5.0 will be ready at same time
GlassFish Java EE 5 application server is some sort of new app server being written by Sun. Preview builds are available now? GlassFish purportedly works well with NetBeans. Plus, it looks like it is open source (which you know that I like).

Tools are mandatory for J2EE 1.4 development. The features of Java EE 5 make development easier, and will be further assisted by smart tools. I knew that there was a new version of NetBeans out, which I was intending to check out at JavaOne. Also, it looks like there is a new NetBeans book out -- "NetBeans IDE Field Guide" -- which is good, because I don't like the documentation for NetBeans...
Posted by andyr at 04:58 PM | Comments (0)

JavaOne: new format

I'm going to pick up a bit of a new format. I'm going to try and blog about the parts of the presentation that are interesting to me, getting away from a full outline of the talk. I figure that the slides can probably be found online somewhere, so I'm going to focus on what my take-aways are.

-Andy.

Posted by andyr at 04:02 PM | Comments (0)

JavaOne: TS-7159: Java Platform Clustering: Present and Future

Research goal is transparent extension of Java programming model and facilities to a clustered environment. More for performance, than failure issues.

Design Approaches:
  • Cluster-aware JVMs (potentially optimal [close to hardware], but loses portability)
  • Compile to cluster-enabled layer (ex. DSM; good performance, but you lose portability and suffer an impedance mismatch)
  • Systems using standard JVM (transforms at code or bytecode level). lose performance but get portability
Compile to DSM:
  • Hyperion - compiles to C code that is DSM aware
  • Jackal - violates Java memory model
Cluster-aware JVM software:
  • Java/DSM (Rice, 1997) - piggybacks on existing DSM, no JIT
  • cJVM (aka trusted JVM) (IBM, 1999) - proxy objects for non-local access (smart), no JIT
  • Kaffemik (Trinity college, 2001) - based on kaffe VM, scalable coherent interface, JIT
  • Jessica2 (2002) - JIT support, thread migration
  • dJVM (2002) - based on Jikes
Systems based on standard JVM:
  • JavaParty (1997) - pre-processor + runtime compiles to RMI. Requires language change
  • JSDM (2001) - supports SPMD apps, not full Java technology
  • JavaSplit - bytecode transformation, integrated custom DSM
  • J/Orchestra - application partitioning, bytecode transformation

The executive summary is that there is some work going on to make Java cluster-aware at the VM level. I'm not sure why Jini wasn't mentioned more, since it seems like a natural fit for clusters. If I ever need to do some clustering, I can check in on the above projects to see if there is a fit.
Posted by andyr at 03:59 PM | Comments (0)

JavaOne: TS-5163: XQuery for the Java Technology Geek

What is XQuery?
  • New language from W3C
  • Queries XML (documents, rdbms, etc.)
  • Anything with some structure
  • under development, not 1.0, at candidate recommendation stage
XQuery advantages over:
  • XSLT - easier to read, write, and maintain; designed with DB optimization in mind
  • SQL - better for hierarchical data (things that don't fit: book data, medical records, yellow pages). DB is designed for columns of numbers.
  • Procedural - define what you want, let engines optimize
demo
  • When pulling data out of XML, easier to show more context around the data
  • Like breadcrumbs to book, chapter, section
  • Then show not only the search term, but also the content around it
Loading Content
  • depends on engine, indexed stores require pre-loading
Vendors
  • Mark Logic (presenters, demos available), eXist (OSS), Saxon
  • Coolest hidden XQuery implementation: Apple's Sherlock
XQuery uses XPath
  • a matching language to select portions of an XML document
  • Like RE engine for XML; "give me every one of these where that or this"
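As a concrete taste of that "give me every one of these where that" style, here is a small sketch using the javax.xml.xpath API that ships with the JDK (the book data here is invented for the example):

```java
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class XPathDemo {
    // Evaluate an XPath expression against an XML string and return
    // the text of the first match.
    public static String firstMatch(String xml, String expr) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(expr, new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        String xml = "<books>"
                   + "<book year='1999'><title>Refactoring</title></book>"
                   + "<book year='2004'><title>Head First Java</title></book>"
                   + "</books>";
        // "Every book where the year is 2004" -- selection by predicate.
        System.out.println(firstMatch(xml, "/books/book[@year='2004']/title"));
    }
}
```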
FLWOR expressions
  • pronounced "flower"
  • stand for: for, let, where, order by, return
  • this is one expression, not five
  • XQuery is technology to manipulate XML that you can find with XPath
  • XQuery doesn't have to produce XML output -- can produce sequence of elements, or just plain text
Deployment: CGI-style
  • Works well on web tier
  • Executes in response to HTTP requests like CGI
  • Speaks XML to back-end, XHTML to front end
  • advantage is that it's easy and fast; can do a blog or searchable FAQ as XQuery (backend is XML, XQuery formats and displays on the frontend)
Deployment: Direct Style (POJO)
  • Call XQuery stack from Java language
  • Think JDBC but for XQuery
  • Fits in your Java technology stack
Deployment: JSP Style
  • XQuery JSP tag library
  • send results straight out, or store in variables
In summary, I feel like I finally understand what XQuery and XPath are all about. This presentation gave a fantastic overview of both, in a way that was engaging and made the two complementary technologies easy to understand. I don't know if these two technologies are directly relevant to my current work at EDS, but who knows what the future may bring. If you are working with structured data in XML, you need to check out XQuery and XPath.
Posted by andyr at 02:43 PM | Comments (0)

JavaOne: TS-7949: The New EJB 3.0 Persistence API

Got to this one late (lunch), so these are not full notes.

Extended Persistence Contexts
  • Rescue the stateful session bean from obscurity
  • natural cache of data that is relevant to a conversation
  • allows stateful components to maintain references to managed instances instead of detached instances
Conversations
  • A conversation takes place anytime a single user interaction spans more than one request
  • sometimes helpful to capture conversation in object(s), can optimize, manage lifecycle, etc.
To be quite frank, the content of this session bored me, so I exited early. The basic summary is that if you are doing hardcore EJB and database stuff, you may want to check this topic out in more detail.
Posted by andyr at 01:02 PM | Comments (0)

JavaOne: TS-7695: Spring Application Framework

Agile J2EE: where do we want to go?
  • Need to produce high quality apps, faster and at lower cost
  • cope with changing requirements (waterfall not an option)
  • need to simplify programming model (reduce complexity rather than hide with tools)
  • J2EE powerful, but complicated
Agile J2EE: Why important?
  • Survival: challenges from .NET, and PHP/Ruby at low end
  • Concerns that J2EE dev is slow and expensive
Why aren't we there yet?
  • Difficult to test traditional J2EE app
  • EJB's really tie code to runtime framework, hard to test w/o
  • Simply too much code, much is glue code (I concur, based on my experience with a J2EE portal in EDS)
  • Heavyweight runtime environment -- in order to test, need to deploy
Enter Lightweight Containers:
  • frameworks central to modern J2EE development
  • frameworks capture generic functionality, for solving common problems
  • J2EE out of box doesn't provide a complete programming model
  • Result is many in-house frameworks (expensive to maintain and develop, hard to share code)
Open Source Frameworks
  • Responsible for much of the innovation in the last 2-3 years
  • Several projects aim to simplify the development experience and remove excessive complexity from the developer's view
  • Easy to tell which ones are popular; driven by collective developer experience (refined, powerful, best-of-breed)
  • Tapping into "collective experience" of developers
How do Lightweight Containers work?
  • Inversion of Control/Dependency Injection (sophisticated configuration of POJOs)
  • Aspect Oriented Programming (AOP)
    • provide declarative services to POJOs
    • can get POJOs working with special features, JMX, etc.
What is Inversion of Control?
  • hollywood principle - don't call me, I'll call you
  • framework calls your code, not the reverse
What is Dependency Injection?
  • specialization of Inversion of Control
  • container injects dependencies into object instances using java methods
  • a.k.a. push configuration -- object gets configuration, doesn't know where it came from
  • decouples object from configuration from environment. Values get pushed in at runtime, so it is easy to run object in test or prod.
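A minimal sketch of what "push configuration" looks like in plain Java -- all class and interface names here are invented for illustration, and this is the bare pattern rather than the Spring API itself:

```java
// The collaborator is expressed as an interface, so implementations
// can be swapped without the service knowing.
interface QuoteSource {
    double quote(String symbol);
}

// A stub implementation -- handy for unit tests, no JNDI required.
class FixedQuoteSource implements QuoteSource {
    public double quote(String symbol) { return 42.0; }
}

public class PortfolioService {
    private final QuoteSource quotes;

    // The "injection point": a container (or a test) pushes the
    // dependency in; the object never looks it up itself.
    public PortfolioService(QuoteSource quotes) { this.quotes = quotes; }

    public double value(String symbol, int shares) {
        return quotes.quote(symbol) * shares;
    }

    public static void main(String[] args) {
        // In production a container would wire this from configuration;
        // in a test we just pass a stub. Same class, either way.
        PortfolioService svc = new PortfolioService(new FixedQuoteSource());
        System.out.println(svc.value("SUNW", 10)); // 420.0
    }
}
```

Because PortfolioService has no container API in it, it runs identically in a unit test or in production -- which is the whole pitch.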
Why is Dependency Injection different?
  • configuration requires no container API
  • can use existing code that has no knowledge of container
  • your code can evolve independently of container code
  • easy to unit test (no JNDI stub, or properties files to monkey around with)
  • code is self-documenting
Advanced Dependency Injection:
  • shared instance, pooling
  • lifecycle tied to container objects
  • maps, sets, other complex types
  • type conversion with property editors
  • instantiation via factory methods
  • FactoryBean adds level of indirection; configured by framework, and returns objects based upon that configuration.
What is AOP?
  • paradigm for modularizing cross-cutting code (code that would otherwise be scattered across multiple places)
  • think about interception
  • callers invoke proxy, chain of interceptors decorate that call with additional functionality, callee finally invoked, and return passes back through chain
  • ideal for transaction management and security
  • enabling technology, for defining own services, and applying them to POJOs.
Incremental AOP
  • think of it as a generic way similar to how EJB implemented
  • can be implemented with dynamic proxies
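A rough sketch of that interception chain using java.lang.reflect.Proxy from the JDK (names are invented; real AOP frameworks generalize this one interceptor into reusable, declaratively-applied advice):

```java
import java.lang.reflect.Proxy;

interface Account {
    void deposit(int amount);
}

public class ProxyDemo {
    // Wrap a target in a proxy whose handler decorates every call:
    // caller -> proxy -> interceptor -> real target -> back out.
    public static Account traced(Account target, StringBuilder log) {
        return (Account) Proxy.newProxyInstance(
            Account.class.getClassLoader(),
            new Class<?>[] { Account.class },
            (proxy, method, args) -> {
                log.append("before ").append(method.getName()).append("; ");
                Object result = method.invoke(target, args); // the real call
                log.append("after ").append(method.getName());
                return result;
            });
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Account real = amount -> log.append("deposit(").append(amount).append("); ");
        traced(real, log).deposit(100);
        System.out.println(log); // before deposit; deposit(100); after deposit
    }
}
```

Swap the logging for begin/commit calls and you have the shape of declarative transaction management around a POJO.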
DI + AOP delivers POJO ideal
  • configure objects w/o invasive API
Spring Framework
  • OSS project, apache 2 license
  • Aims:
    • simplify J2EE development
    • provide a comprehensive solution to developing applications built on POJOs.
    • aims to address all sorts of applications, large banking, or small in-house
  • easy to work with JDBC, Hibernate, transaction management, etc.
  • consistency makes Spring more than sum of its parts.
  • don't need to deploy, can all run from within IDE if desired.
Posted by andyr at 01:01 PM | Comments (0)

JavaOne: TS-5958: Amazon Web Services

Terms:
  • AWS - Amazon Web Service
  • ASIN - Amazon Standard Item Number
  • Associate ID - pass this # into all AWS calls
  • REST - Representational State Transfer
Concept is the programmable website
  • support for industry standards
  • remote access to data and functionality
  • about getting direct access to guts of website
What is AWS?
  • APIs that give any developer outside of amazon programmatic access to Amazon's data and technology.
  • Includes product information, customer-created content, shopping cart, etc.
Why AWS?
  • Legitimize outside access, site scraping sucks
  • Third-party developers extend the Amazon platform
  • Harness creativity of others
Offering:
  • SOAP API
  • REST API
  • XSLT transformation service - can apply transform to XML results before returned. Can build website with no physical template, just supply XSLT stylesheet, in order to build "virtual website".
  • WSDL - documentation for schemas
  • Tons of documentation & community outreach
REST vs. SOAP:
  • SOAP is standard, strongly typed, requires toolkit
  • REST is convention, ad hoc, easy ramp-up, prototype in browser, really easy to use. Key-value pair based. Easy to script. Develop in browser.
  • REST is about 80% used, SOAP other 20%.
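The key-value flavor of the REST API can be sketched in a few lines of Java; the endpoint and parameter names below are illustrative only, not the exact AWS parameter set:

```java
import java.net.URLEncoder;

public class RestQuery {
    // Build a REST-style URL from alternating key/value pairs.
    // (Values are URL-encoded; keys here are assumed to be plain ASCII.)
    public static String build(String base, String... kv) throws Exception {
        StringBuilder url = new StringBuilder(base);
        for (int i = 0; i < kv.length; i += 2) {
            url.append(i == 0 ? '?' : '&')
               .append(kv[i]).append('=')
               .append(URLEncoder.encode(kv[i + 1], "UTF-8"));
        }
        return url.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical item-lookup call; paste the result into a browser
        // and you get XML back -- that's the "prototype in browser" appeal.
        System.out.println(build("http://webservices.amazon.com/onca/xml",
                "AssociateTag", "mytag",
                "Operation", "ItemLookup",
                "ItemId", "B0002IQML6"));
    }
}
```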
Getting started: Summary:
  • Amazon offers 3 different things via web services API
  • Easy to use via Java
  • Web services are like an API for a specific website, allowing 3rd party developers to build new sorts of apps, just as if they were writing apps for the Windows API.
Posted by andyr at 10:31 AM | Comments (0)

JavaOne 2005

After a too-short recovery time from NYC, I am in San Francisco today, attending Sun's JavaOne conference. I am going to be trying to blog about each session that I attend, and then cross-posting my public posts to the EDS blogosphere. So, for my non-computery readers (you know who you are), you're going to want to ignore the next like 3 days or so.

-Andy.

Posted by andyr at 09:48 AM | Comments (0)

June 23, 2005

Open Source vs. Commercial Source: where is it headed?

I read a great interview with Linus Torvalds the other day. The main thrust of the interview was questioning Linus as to where the Open Source vs. Commercial Source divide is ultimately headed. Pretty interesting stuff, and well worth a read.

I have been doing some thinking about this as well recently, as I try and evangelize Open Source at EDS. My thoughts are pretty similar to where Linus is at. Open Source is going to continue to commoditize certain things like OSes, browsers, and potentially even office suites. The key for Closed Source commercial vendors is going to be to stay one step ahead of the curve, and earn their revenue by innovating. People will pay in order to be at the cutting edge, the state of the art. And companies will pay for support. Those are the two spaces that I increasingly see commercial vendors playing in.

-Andy.

Posted by andyr at 11:21 AM | Comments (0)

May 19, 2005

Google is really cool

A couple of things about Google have been bouncing around in my head lately, and it all came together with something that I read on Slashdot today. Microsoft's CEO Steve Ballmer made Slashdot today, with his prediction that Google is a one-trick pony, and as such will be dead in 5 years. Last week, I read an article by Robert X. Cringely, stating that the Google Web Accelerator is a portent of how Google will become a "platform". Thankfully, I don't think that either point of view is exactly correct.

While it's probably true that if Google just sticks to search, Microsoft will be able to do to them what they did to Netscape, I don't think that is Google's game. I think that Google is looking to be a repository for accessing data. And the "platform" (if you can call it that) will be their APIs, which allow 3rd party applications to interact with and add value to this data in their own ways.

Case in point: this Wired news article that I read the other day. It highlights several new applications that are making use of Google Maps in new and interesting ways. One of the applications that immediately grabbed me is something called HousingMaps, which combines apartment listings from craigslist with mapping information from Google. Go ahead and try it out -- it is super neat. But the reason why this application reached out and grabbed me is because this is something I could have really used the last time that I was looking for an apartment. With one click, I saw all of the current craigslist apartment listings as pushpins on Google's map. This is so awesome! And it is all made possible by the fact that Google's "platform" is eminently hackable and extendable by third parties.

Of course, the one thing that Microsoft touts over and over is that they provide a platform -- i.e. Windows -- which is a rich ecosystem for 3rd party developers to build their own applications, thus allowing the free market to serve customers in a way that no monolithic entity can. Well, guess what kids? Google can play that game too. And while I don't want to over-hype this (because hyping some company as a Microsoft-killer is a sure way to get them killed by Microsoft), I sure am keenly interested to see where this is going.

-Andy.

Posted by andyr at 11:28 PM | Comments (6)

May 05, 2005

Finally!

iWork, my PowerBook, and Tiger

My copy of Tiger finally arrived today (iWork came yesterday). My initial analysis: Tiger fixes iSync not working with my crappy Nokia 6600 cell phone, so that is worth the price of admission right there.

-Andy.

Posted by andyr at 12:17 AM | Comments (0)

February 14, 2005

Comments disabled

I've temporarily disabled the ability to post new comments to all of the blogs hosted on redefine. The comment spam is getting pretty bad, and I need some time to regroup on a technical level, and come up with a different anti-spam solution other than a blacklist. I think that like Carl before me, I'm going to go with TypeKey. This appears to require MovableType 3.x, however, which requires both money, and time, since I can't use the FreeBSD ports collection to install it. Hmmm...

-Andy.

Posted by andyr at 12:20 AM

February 02, 2005

The Machine Marches On

Great stuff on wired.com today: "Hide Your IPod, Here Comes Bill". I read this article with a high degree of amusement. As the Microsoft machine marches on, taking over market after market, it is nice to see them stymied, as evidenced by their own employees. Microsoft employees tend to be a smart lot -- so if they are buying iPods in droves, then it seems like management should try and figure out why, instead of simply banning the practice.

From what I've read about the "PlaysForSure" program, it seems like Microsoft has solved a lot of the reasons why non-iPod mp3 players have sucked on Windows. So eventually, with this software in place, the non-iPods may start to take over the market (just like wintel PCs before them). But for right now, Microsoft has got nuthin'.

But meanwhile, the machine continues to march. I had a quick look at Microsoft's new "MSN Search" the other day, and at first glance, it appears to be a total Google rip-off -- at least from a UI perspective. It looks like the search results that it is returning still aren't as complete as Google's. But how long will it be before Microsoft can out-Google Google?

sigh.

-Andy.

Posted by andyr at 04:01 PM | Comments (2)

The T-Mobile Tango

I wanted to have good Internet access while traveling abroad, both to keep on top of work, but also to keep in touch with my friends and family (and TV). Based upon the information that I had from other EDS employees who had gone to Germany, T-Mobile WiFi HotSpots were plentiful, but expensive. In fact, it is 2 euros for every 15 minutes -- 8 euros an hour. Computing the exchange rate is left as an exercise to the reader -- but suffice it to say, this is quite expensive. I did some research, however, and found that accounts on the T-Mobile HotSpot system in the USA can be used on T-Mobile HotSpots in Europe. The advantage, of course, is that in America (being the gluttons that we are), you can buy an "all you can eat plan" for a flat monthly fee. So, before leaving for Germany, I added T-Mobile's HotSpot service to my cell phone plan.

My first week in Germany, I was staying at a hotel that didn't have T-Mobile. The Wifi in the hotel was served by Swisscom, and there was no roaming agreement between Swisscom and T-Mobile. So, I didn't really try to use the T-Mobile service in Europe until last Friday, when I was at Frankfurt airport, waiting to go to England. And of course, it didn't work.

Over the weekend in London, I tried it twice more (both times at Heathrow), and was not successful in getting my account to work. So, I returned to Germany, tired and frustrated by the fact that my T-Mobile HotSpot account wasn't working. My second week in Germany, I am staying at a different hotel which is served by T-Mobile. So, I spent an hour on Sunday evening on the phone with T-Mobile, trying to resolve the situation.

I think that T-Mobile is just like any multi-national company. From the outside, it looks like one homogeneous entity. However, internally, due to regional laws and other political reasons, it is really many different sub-companies. The support website for the T-Mobile HotSpot in Germany listed two different phone numbers. In addition, the website advertises that the support personnel speak German, English, and Turkish. When I called the first number, the person told me (in broken English) that the English-speaking support personnel are only in Monday through Friday.

So, at that point, I was skunked. But luckily, I had picked up a T-Mobile brochure when I was in London, and it had the support number for T-Mobile UK. I called them up, and while the helpful Scotsman who answered couldn't fix my account himself, he was able to give me the phone number for T-Mobile HotSpot support in the USA. Once connected to T-Mobile USA, I found that my account was locked!

Why was it locked you might ask? Because I reported my cell phone lost, and asked that my account be on hold. When I did this, I assumed that they would lock the cell phone account, but leave the WiFi account. But no, that isn't how T-Mobile works. I have one account, and they have one giant lock, and that is how it goes. So, I had to establish a new, separate account that was WiFi-only, in order to get on the 'net. Sheesh.

The lesson: never lose your cell phone. It really sucks.

-Andy.

Posted by andyr at 09:44 AM | Comments (0)

January 11, 2005

The Mac mini power adapter

The Mac mini has an external power brick, unlike the iMac G5:

Still, not that big of a deal, considering how small the Mac mini is. It would be great if it used the same power supply as the PowerBook/iBook, but oh well.

-Andy.

Posted by andyr at 03:36 PM | Comments (1)

The Mac mini

DSC00898.JPG

Posted by andyr at 03:29 PM | Comments (1)

The Mac mini, underneath

I flipped one of the Mac minis that Apple had on display over, and got a picture that I haven't seen anywhere else yet:

DSC00899.JPG

The bottom appeared to be a solid chunk of metal, with the Apple logo etched into it. Sweet.

-Andy.

Posted by andyr at 03:29 PM | Comments (0)

The wall of iPod shuffles

Apple has a wall that runs along the side of their booth, devoted to the iPod shuffle:

DSC00913.JPG

Posted by andyr at 03:28 PM | Comments (0)

Me touching an iPod shuffle

DSC00911.JPG

Apple has really nailed this product. Again.

-Andy.

Posted by andyr at 03:28 PM | Comments (5)

January 10, 2005

Dang!

The first law of buying a computer is that as soon as you buy it, there will be something {faster, sexier, smaller, cheaper} for purchase (choose your own attribute). Well, I just saw this on Gizmodo. And it certainly looks slicker than the xPC that I just bought. Rats!

-Andy.

Posted by andyr at 09:26 PM | Comments (1)

My Shuttle SN95G5

After months of dithering, I finally bought the PC-of-my-media-center dreams:

DSC00862.JPG

This box is going to eventually house my TV capture card, and run Linux and MythTV, serving all of my personal video recorder needs. The hardware:

  • Shuttle SN95G5 Case and Motherboard
  • AMD Athlon64 3000+
  • AOpen NVidia GeForce FX5200 128Mb AGP Video Card
  • 1Gb of Corsair value RAM
  • Seagate 200Gb Serial ATA HD
  • Sony Dual-Layer DVD-/+RW CD/DVD Burner

I know that it is way more power than I need for a simple PVR, but I want it to be fast when I crunch video down to Mpeg4. I also want to rip DVDs with it. And run Seti@Home or something (since it has to be on all the time anyway). So, I splurged a bit.

For right now, I've got Windows XP on it, because there are a couple of games that I want to play, and I wanted to inaugurate this computer in style by killing the heck out of Kevin in Urban Terror. Also, I don't have time to mess with Linux right now (see the bit about Urban Terror). But when I get back from Germany, it is going to be on.

Once again, the gallery is here.
-Andy.

Posted by andyr at 12:15 AM | Comments (4)

December 15, 2004

Comment spam

So, we have been getting a fair amount of comment spam for the last several months. Once I installed Jay Allen's "MT-Blacklist", it has really only been annoying. When I got home from work today, however, I noticed that my machine was thrashing. It was working so hard, that the console was unresponsive. A reboot later, and I was back in control of the thing. Doing some initial investigation, it looked like somebody (or somebodies) was jamming on the comment system for the blogs that are hosted here. I disabled it quickly, so that I could get on with my life.

Later (after dinner & "The Daily Show"), I found that as soon as I re-enabled the "mt-comments.cgi" script, the box was immediately hammered again. I managed to narrow all of the spam traffic down to 4 IP addresses, being served by an ISP called SAVVIS. Looking in DNS, it looks like these IPs are being used by a company called "Marketscore". From their website, it is hard to tell if they are legitimate or not. For the time being, I have firewalled them off, and fired off an e-mail to the abuse department over at SAVVIS. But in 2005, I'm going to have to do two things:

  1. Come up with a better anti-spam solution for the blogs hosted here.
  2. Tune my FreeBSD machine -- because getting pounded with HTTP CGI requests shouldn't hork the box to the point that I can't login on the console.

-Andy.

Posted by andyr at 08:52 PM | Comments (2)

December 02, 2004

I really should have gone to bed a long time ago (but this is just too cool)

So, on MacNN today, I noticed a blurb about some instructions for compiling the MythTV Frontend on MacOS X. I had a hard time loading the page (I tried all day -- it was posted to some wiki that was overloaded), but finally managed to get a peek late this evening. I found the instructions for compiling it up, but that looked like a bunch of, well, work. Luckily, I also found a pre-compiled binary, and so I was off to the hacking races. I had to do some mysql hacking, and poke some holes in my DMZ firewall, but even after all of that, I was having issues.

It seems like MythTV stores information about the backend servers in the MySQL database. This information includes the IP address of the server. So, my mythfrontend on MacOS X was connecting to the mysql database on my myth box, and then trying to connect to the mythtv server ports (6543 and 6544) on the backend server. Unfortunately, when I configured mythtv, I was thinking only of the single-box case, and so it appears as if the backend server IP address that I configured is 127.0.0.1, not the real IP of the box. This means that mythfrontend running on my PowerBook was trying to connect to 127.0.0.1 in order to watch TV.

I don't really know how to fix this, but it probably involves changing some data in the database. Not something that I want to do on my PVR while it is recording Badly Drawn Boy on Last Call with Carson Daly. So, what did I do? Why, I whipped up an SSH tunnel, of course. But that's not the amazing part -- the amazing part is that it actually worked! I was able to stream tonight's episode of The Daily Show, through SSH, over my 802.11g wireless, and watch it in realtime on my PowerBook.

This is really awesome. It means that I now have a wireless TV in my apartment (and it didn't cost me an arm and a leg!). Of course, I don't really need such a thing in my current one bedroom apartment -- but I can envision several uses when I'm back living with a roommate again.

-Andy.

Posted by andyr at 01:55 AM | Comments (2) | TrackBack

September 24, 2004

NetNewsWire 2.0: Finally!

So, I am glad that Ranchero Software finally released NetNewsWire 2.0, even if it is only a beta. I bought 1.0.8 about a month after I started blogging, and I was starting to get a little unhappy with it, because the software appeared to be stagnating. But I bought a copy so that I could encourage further development! But, my purchase has paid off, because 2.0 is awesome. It finally supports Atom feeds, which means that I can finally have Chris' blog polled from NetNewsWire. It has a new swanky tabbed interface for viewing HTML articles right in NetNewsWire (which is vastly superior to popping open new Safari windows). Plus, it seems like it is faster at going out and polling for new articles, which is quite welcome.

Those are the new features that have immediately jumped out at me. Well, there is one more thing -- I had numerous beefs with the built-in blog editor, but I used it for posting to my blog anyway. In NetNewsWire 2.0, Ranchero has gone ahead and put this feature out of its misery, and removed it from the product. But am I mad?!? Heck no, because they have gone ahead and rolled out a dedicated blogging client, MarsEdit. I've been using it for the last several days, and so far I am pretty happy with it. It has already won me over with how easy it is to paste URLs (much faster than in the old client).

If you have a mac, I definitely recommend checking these two applications out.

-Andy.

Posted by andyr at 12:48 AM | Comments (0)

September 22, 2004

How the corporations screw 'ya

After doing some research on the 'net, I found that Fujitsu offers a 5 year warranty on their enterprise SCSI drives, which is pretty amazing. The drive in redefine that is failing was manufactured in March of 2000, so it is 4.5 years old. So, I called up Fujitsu today, to see what I could do about getting my drive repaired under warranty. The first piece of information that the nice Fujitsu woman asked me for was the model number of the drive. I gave it to her (noticing that it ended in the letters "DL" as I said it), and upon hearing this, she immediately went into the song and dance -- "Did this come with a Dell computer?". To which I replied that it did, and she of course told me that I had to deal with Dell directly.

It seems that one of the ways in which Dell gets a discount on parts is to negotiate a lesser warranty with the manufacturer. They then turn around to me, the customer, and sell me an entire computer with a 1 year warranty, that I would need to pay to extend, even though if I were to buy the parts myself, individual ones may have longer warranties.

On a lark, I contacted Dell (I say "on a lark" because I knew that my computer had long since fallen out of Dell's warranty), and the Dell representative told me that my computer was in fact out of warranty, and that I could look into buying a replacement part from Dell if I wanted.

I'm not really pissed off about any of this, I just find it interesting. It is also another case for building my own computer that I hadn't really considered before.


Another interesting thing that I learned when researching my soon-to-be-completely-dead disk is that while Fujitsu warrants the non-OEM version of the drive for 5 years, they go on to say that it was only designed to last for 5 years. Basically, after 5 years, any additional mileage that you get out of it is due to your own personal good fortune. I find that to be interesting for an enterprise-class device, which can oftentimes be in service for far longer than initially planned. It also makes me suspicious of 10,000 RPM (and higher) drives. My gut tells me that the higher rotational speed of the platters hampers drive longevity. The 7,200 RPM IBM drive that I am using now as a backup was manufactured in September of 1998, and was in continual operation from the time I bought it until June of this year. I think that IBM really knew how to make disks, once upon a time...


All of that being said, I am still running off of the suspect Fujitsu drive. Since fixing the bad sector, it seems to be performing okay. I beat it to hell today upgrading a whole bunch of ports, and I haven't seen any more SCSI errors. I think it is just a matter of time, however...

-Andy.

Posted by andyr at 12:54 AM | Comments (2)

September 21, 2004

redefine update

So, I did a bad sector check in the SCSI BIOS (Adaptec's SCSI chipsets are awesome), and the check found one bad sector on the 9Gb Fujitsu disk, which I told it to remap. The machine seems to be fine now, but bad sectors are indicative of pending drive failure. So, I'm going to have to come up with a long-term solution to this problem. For the time being, I have resurrected my old 4.5Gb IBM U2W SCSI disk, and slapped that in redefine. I've set up a cron job that rsyncs the relevant bits from the 9Gb disk over to the 4.5Gb, so I can boot off of that in an emergency. But I think that going forward, I need to come up with some sort of RAID solution, so that this machine can drop a disk and I can wait until the weekend to deal with it.
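For the curious, a backup cron like the one described might look something like this. The paths and schedule here are made up for illustration -- the post doesn't say what the real job looks like:

```shell
# Hypothetical crontab entry: every night at 3:30 AM, mirror the system
# directories from the 9Gb disk onto the backup disk mounted at /backup.
# -a preserves permissions, ownership, and timestamps; --delete removes
# files from the mirror that no longer exist on the source.
30 3 * * * rsync -a --delete /etc /usr/local /var /backup/
```

The `--delete` flag is the debatable part: it keeps the mirror bootable and exact, but it also means a mistake on the primary disk propagates to the backup on the next run.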

But this caps a "bad computer day" for me. Not only did redefine have some issues, but towards the end of my work day today, a server that I was working on went south. A co-worker was doing a package install at the time, and we suspect that the package had something like "rm -rf $INSTALL_LOC/" in a post-install script. Of course, if the "$INSTALL_LOC" variable is null, then the shell expands that command to "rm -rf /", which on any UNIX box (and Solaris in particular) is quite a bad thing to do.

sigh

-Andy.

Posted by andyr at 01:02 AM | Comments (1)

September 20, 2004

Awww, damnit

It looks like the SCSI disk in redefine (the server that hosts this blog) is failing. This machine has a 9Gb U160 10k RPM SCSI drive holding its boot and root partitions, and a 160Gb IDE disk serving as /home. I was messing around from work today, trying to update my ports collection, and the machine has been acting strange. The output of the dmesg command shows a lot of messages like this:

<<<<<<<<<<<<<<<<< Dump Card State Ends >>>>>>>>>>>>>>>>>>
(da0:ahc0:0:0:0): SCB 0x6b - timed out
sg[0] - Addr 0xb184000 : Length 4096
sg[1] - Addr 0x7f85000 : Length 4096
sg[2] - Addr 0xd546000 : Length 4096
sg[3] - Addr 0xf127000 : Length 4096
(da0:ahc0:0:0:0): Queuing a BDR SCB
(da0:ahc0:0:0:0): Bus Device Reset Message Sent
ahc0: Timedout SCBs already complete. Interrupts may not be functioning.
(da0:ahc0:0:0:0): no longer in timeout, status = 34b
ahc0: Bus Device Reset on A:0. 5 SCBs aborted

Dang. Everybody who has data on this box should officially back it up, starting now.

-Andy.

Posted by andyr at 02:47 PM | Comments (1)

September 19, 2004

The spammers cometh

So, Mark noticed on Friday that the spammers have found redefine, and as a result, several of our blogs have been "crapflooded" -- i.e., the comments to our posts were filled with spam. Rushabh suggested that I install MT-Blacklist, which I did today before going into the city. So far, it has been useful in de-spamming my blog (deleting comments in the MT user interface is painful), but the jury is still out. The problem with any sort of blacklist is that you have to keep the definition file updated (which doesn't appear to be easy to automate), you can get false-positives, and the spammers can always be "one step ahead". Ultimately, I think that I may either just disable comments, or move to a forced-registration system, like what Carl is using now.

-Andy.

Posted by andyr at 03:22 AM | Comments (3)

September 02, 2004

Tivo, Schmivo

Every time I go to Chris's house, and see him using his Tivo, I want one. But every time, I find some reason (or 3) why Tivo just isn't right for me. Well, I was just at Chris's place at the end of July (for an entire week), and so he really had me on this whole Tivo thing. And then, last month, Tivo started offering some rebate thingy, which made it even more compelling.

But still, I resisted. I was going to write a long blog post about why I resisted, but instead, I'm going to write about the solution.

I have thought about just building my own Tivo-like device before, but I didn't think the end result would work well enough for me (I am a demanding TV user). But when I was up in Seattle, Fredrik told me that he built himself a MythTV box, and that it was working great. So, he totally sold me on it.

I've spent the last couple of nights surfing for PC hardware, because I don't really have anything suitable for integrating into my entertainment center. This whole project has kind of morphed into me buying a Cube PC, because I have always thought those things are cool. I would have bought one years ago, but I got into the whole Apple thing instead. Unfortunately, to assemble the Cube PC that I want would cost about $1k (when all is said and done). That is a little bit much for me to spend, considering I'm still not 100% sure that this is all going to work.

So, I decided to just go ahead and buy the cornerstone of the PVR, the TV capture card, and see if I could get it all working in my old dual Pentium-III 500Mhz machine. The card of choice amongst the Linux crowd is the Hauppauge WinTV PVR-250. I saw over on Gizmodo that Circuit City is selling the thing with some massive rebates, so today I pulled the trigger on that. I had to go up to Hayward in order to pick the darn thing up, which was okay, because I got to throw Mike a bone.

So far, I have managed to install the card on my Windows XP partition, and in less than an hour, have it at a point where I could watch "The Daily Show". Over the long weekend, I will be installing Linux, and seeing if I can produce a workable prototype. I'll try it out for a few weeks, and if it seems like the whole thing is going to work, then I'll buy some sort of entertainment PC. This will also buy me some more time, so I can find the exact PC that I want.

It is gonna be great.

-Andy.

Posted by andyr at 12:42 AM | Comments (2)

September 01, 2004

That previous post

I am playing around with some new software that I downloaded, instead of going to bed (I have a cold -- I really should sleep). The software is called "Photon" by Daikini software. Photon purports to be some sort of application that makes it easy to post pictures from iPhoto onto a blog. The documentation on their website is non-existent, so it took me awhile to figure out how it works. After several test posts, however, it looks like I got it working.

The biggest drawback to this system is that when you export a single photo from iPhoto to the blog, you end up with an entry like the one that I just made. There is no text surrounding the photo to give it more description. I know that Carl very rarely writes verbiage to accompany his photos, but I'm not Carl. The second major drawback is that I like the thumbnail of the image to itself be a hyperlink, which takes the reader to the larger version of the image. It doesn't look like Photon supports this way of doing things.

So, I'm not sure if I'm going to pay the $10 or not. I'll have to play around with it more, I guess.


So, about the picture? There really isn't any story -- on my last day in Seattle, I was hanging with Justin and Sarah, and transitioned to hanging with Rushabh, Kristen, and Ted. I didn't have too much time before I had to head to the airport, so I suggested that we check out the University of Washington campus. It really worked out: since Kristen is a student there, she was deputized as tour guide for our group. I can't remember which building that is, but the picture looked good, so I posted it.

-Andy.

Posted by andyr at 12:26 AM | Comments (0)

August 20, 2004

DSPAM

The EECS department at Case (where my e-mail is hosted) has recently added DSPAM to their mail servers, replacing SpamAssassin. The switch has been a little annoying -- it has forced me to figure out how to move messages between different IMAP folders in pine, which, while I have figured it out, still takes too many keystrokes. The reason messages have to be shifted around is that DSPAM is a learning system, similar to Mail.app's Junk system.

And of course, because DSPAM needs to be trained, it has really sucked at finding SPAM for the last couple of days. I think that it might be getting a bit better, but it is hard to say. I think what might be hurting it is that when I do use Apple's Mail.app, it plucks the SPAM out into its own folder, and as a result, DSPAM doesn't get trained. I'm going to have to research how to make these two kids play better together.

-Andy.

Posted by andyr at 12:03 AM | Comments (1)

August 08, 2004

277

When I was out at OSCON two weeks ago, I performed a little experiment. Since I knew that I was going to be using only one computer, my PowerBook, for the entire week, I decided that I would not delete any SPAM. Instead, I would let it all pile up in the "Junk" folder in Mail.app. I have been curious for awhile as to how much SPAM I'm actually getting, but it has been hard for me to track, because I am pretty fanatical about deleting it.

So, from Sunday the 23rd of July through Sunday August 1st, I didn't delete a single SPAM. And the total that I reached? A mere two hundred and seventy-seven messages. On the one hand, that is a lot of e-mail. It occurs to me as I write this that I should also have tracked the total number of e-mails that I received in that week, so as to determine the ratio of signal to noise in my inbox. But, one conclusion that I can reach is that I'm probably getting less SPAM than many other people out there on the 'net.

Oh, and the other conclusion is that SPAM sucks. But everybody knew that already, right?

-Andy.

Posted by andyr at 09:07 AM | Comments (0)

April 16, 2004

Hackery

So, let's say you have a bunch of DivX 5.0-encoded AVI files of a live concert. And you really like these files, and have them playing all of the time -- not so much so that you can watch, but so that you can listen to the music. Well, at that point, it sure would be a lot more convenient if these files were mp3 files, instead of DivX video files.

And let's further suppose that this very situation happened to a certain someone who owns this blog, and that he decided to hack his way out of it. This is what you might do:

ffmpeg -acodec copy -i Denali02-Blackcat-Apr2003.avi 02.mp3 -map 0.1:0

The 'ffmpeg' command is an open source project for recording, encoding, and slicing video and audio files. I had a vague notion of this program (I remembered installing the FreeBSD port as a dependency for something more interesting, like VLC I think). But a little googling brought me back to this program, and the above command line (applied to each of my video files) was exactly what I wanted.

I know that I could have used some program like "Audio Hijack" in order to get the raw audio, but then I would have had to re-compress it into mp3 format, and that would have been too lossy for my tastes. Instead, I wanted to simply demux the video files, stripping off the audio stream and saving it to a separate file. Which is exactly what the "-acodec copy" flag did -- it specified that the audio codec to be used in the transcode should be a straight copy. The other bit of magic is the "-map" flag, which performs a one-to-one mapping from a stream in the input file to a stream in the output file. VLC said that the audio stream was stream #1, but according to ffmpeg, it was stream #0.1. Go figure.
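Applied to a whole directory of files, that command turns into a short shell loop. The file name is the example from above, and the `0.1:0` stream mapping is the old-style ffmpeg syntax shown in the original command:

```shell
# Strip the audio stream out of every AVI in the current directory,
# saving each one next to its source as an .mp3. Because the audio
# codec is "copy", there is no re-encoding and no generational loss.
for avi in *.avi; do
    ffmpeg -acodec copy -i "$avi" "${avi%.avi}.mp3" -map 0.1:0
done
```

The `${avi%.avi}` expansion just chops the `.avi` suffix off the file name before tacking on `.mp3`.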

But anyway, now I have my mp3 files, and they are in iTunes, and based on the lyrics, I am figuring out which song is which. Awesome!

-Andy.

Posted by andyr at 12:39 AM | Comments (1)

February 18, 2004

Windows XP, defeated

I managed to slay the twin dragons of Windows XP and productivity today, by getting my new Dell installed with a fresh copy of XP, activated by the key printed on the box. To Michael K.H. Au-Yeung, wherever you are, I take my hat off to you. I couldn't have done it without you.

And while I was at it today, I managed to get pathetically little "real" work done. Even better!

-Andy.

Posted by andyr at 12:18 AM | Comments (0)

February 17, 2004

One more thing

I'm pretty sure that my first Dell, which I bought back in '95, was assembled in the USA. I'm not claiming that all of the parts the computer was made of were built in the USA. I'm just saying that final assembly was done in the US -- at least, that's what I remember.

While sticking my head inside the case of the machine that I got today, I noticed that a lot of components were made in China. Components like the case, power supply, cables, motherboard. And I got to thinkin' -- I bet this thing was assembled in China, based almost entirely on Chinese-built parts.

And then I thought about how times had changed.

And then I got one last whiff of the "new computer smell", and got to playing with my new toy.

-Andy.

Posted by andyr at 12:30 AM | Comments (2)

Windows Product Activation

So, I got my new machine at work today.

Let me just take a moment to digress -- finally! I have been waiting to get a new machine for like, ever. The other two people on my team got new machines last year (like around Octoberish) -- but did I get any love? Not by a long shot. But now I am in possession of a 3Ghz P4 with a cool Gig of ram (up from a PIII 866 that I just got up to 512Mb like, 6 weeks ago). Man, am I ever excited.

It's too bad that Windows XP decided to give me the old "once again".

The problem, in a nutshell, is that this Dell came from the factory with the stock corporate image on it. But for various reasons, I got it into my head that I didn't want said image on my desktop. So, I figured that I could just use my Windows XP CD to perform a fresh install, but use the Product Key that is on my new Dell in order to perform all of the activation procedures that appear to be a "necessary evil" these days.

But of course, Microsoft is on to me. They saw me coming from a country mile. It appears as if Microsoft is mastering several different "Windows XP Professional" CD images, with specific differences between OEM and Retail. So a Retail XP CD won't accept Product Keys that are for the OEM version, and vice-versa.

Luckily for me, I didn't get onto BitTorrent today, and I most-certainly did NOT download a Windows XP "8-in-1" ISO image. Which is all a very good thing, because it means that I won't be wasting my entire day fighting this battle tomorrow...

-Andy.

Posted by andyr at 12:26 AM | Comments (0)

February 10, 2004

Also, Amazon sucks

I ordered an iSight from Amazon last week Friday. My dad ordered one from the same company, on the same day.

He got his today. Amazon is projecting that I'll get mine by the middle of next week.

Bitches.

-Andy.

Posted by andyr at 12:35 AM | Comments (0)

February 07, 2004

Getting the old Samba beat-down -- Twice!

So, it was another really long day at work today. I spent the vast majority of it bashing my head against a problem that one of our NT SA's was having. Without getting mired in the boring details -- he was trying to image a server using ghost, and dump the resulting Gigs 'n Gigs of data onto one of our Samba servers.

Everything with his boot disk seemed fine, but when ghost got started, it died right away saying "not enough space on device for image headers", or some such crap. I checked to make sure I could create a file on the shared drive, and that the drive had plenty of space (check and check).

So, I thought that maybe it was a problem with some sort of file size limit, or something. I set out to find a copy of dd for DOS (so that I could run dd if=/dev/zero of=some_large_file.junk bs=1024 count=1048576). Basically, I wanted to see if I could write out a gig+ file in one crack. Of course, I couldn't find anything that ran in plain old DOS.

So, I set out to write a batch file that did much the same thing.

Much remembering, cursing, fighting, and debugging later, I finally had a script that reasonably approached what I wanted. I took it down to the server room, mentally preparing myself for a long wait as the computer wrote zero's to my test file. However, I was surprised when my batch file started printing an "out of disk space" error right after I started it. How big of a file did it write before the disk space errors started?

2,857 bytes.

Yes, that is it. A little more than 2Kb. Cripes. Did the shared volume have well over 2Kb free? Oh hell yes it did.

To make a long story even longer, after much debugging of boot disks, much poking at Samba (even debug level 10 was no help), and much voodoo, I still don't know what's wrong. I came up with a work-around for the NT guy (solution: use an NT box as the server), but I still don't know what's up with Samba.

Fast forward to this evening/morning, after all of the cards (and there were a lot of cards played) have finished. Kevin and I took on round 2 of his mission to be able to get at his code from the VPN. The sysadmins where he works have wisely configured the Samba server that he needs to disallow IP addresses associated with the VPN. Why? Because they want Kevin to learn about SSH tunneling and such so that he'll devise a work-around.

We beat up on Windows enough that I got to the point where I was tunneling NetBIOS-in-TCP/IP-in-SSH in my test environment. But, once again -- I noticed some more strange Samba behavior. When I tried to connect through the tunnel, any share that allows a guest connection works, but shares that require my username/password don't! Argh!
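The tunnel itself boils down to one ssh invocation, something along these lines. The host names and share are invented for illustration, and the real setup also needed some Windows-side tricks to get NetBIOS talking to localhost:

```shell
# Forward local port 1139 through the SSH gateway to port 139 (the
# NetBIOS session service) on the Samba server behind it.
# -N means "no remote command, just forward the port".
ssh -N -L 1139:fileserver.internal:139 kevin@gateway.example.com

# From another terminal, an SMB client can then reach the share
# through the local end of the tunnel, e.g.:
#   smbclient //localhost/code -p 1139 -U kevin
```

The local port has to be something unprivileged like 1139 unless you run ssh as root, since ports below 1024 can't be bound by a normal user.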

I give up. Somehow, I just know that this is all Microsoft's fault...

-Andy.

Posted by andyr at 02:53 AM | Comments (4)

January 21, 2004

Have you ever noticed?

That in Safari, when working in a form "textarea" HTML element, switching tabs resets the cursor back to the top of the textarea?

That is very annoying! Especially when one is trying to edit their template file in Movable Type...

-Andy.

Posted by andyr at 11:52 PM | Comments (0)

So, this blog thing is taking off...

I set up Movable Type on my machine just to mess around. But, my friends started asking me for accounts (and then their friends), and this whole thing is starting to take off. Mark has an article up that has like over 5 comments! Wow!

Does this mean that I have to be a responsible admin-type-guy now?

-Andy.

Posted by andyr at 11:44 PM | Comments (2)

January 18, 2004

I am just not feeling this whole "Debian" thing

So (can I over-use that word any more?), without getting into too much detail, yesterday was spent trying to get some friends set up with Dreamweaver MX, so that they could publish content to Mike and Kevin's Linux box. This particular Linux box happens to be running Debian, thanks to the influence of "a certain Guju"...

Normally, I can deal with Linux in just about any flavor, but Debian is different enough to be giving me fits. After wasting a good chunk of yesterday fighting various Dreamweaver/SSH/ftp/NAT/tunneling issues, I decided that I would like to leave all of that in the dust by configuring the Linux box to act as a VPN server for the Windows VPN (PPP over L2TP) client.

Sounds simple enough, right?

Well, it took all day today, but it finally looks like I will have a kernel that both:
  1. Has the prerequisite FreeS/WAN support in the kernel, and
  2. compiles to completion, without error.

Ugh. So this means that I spent a lot of time fighting to get all of the requisite sources and packages on the box. And then fought trying to understand Debian's unique way of compiling the kernel (make-kpkg). And then, watched the compile fail in the FreeS/WAN "ipsec_init.c" code.

Much, much, much use of Google later, I decided to apply a patch to the "freeswan-kernel-patch" (patching the patch -- that is great). One hunk from that patch failed to apply, so I applied it by hand. Now things appear to be working -- of course, I say "appear" because the kernel is still compiling (it has been at least 2.5 hours as of this writing). Granted, this box is a single-processor 400Mhz Celeron. But come on, my FreeBSD box has a comparable processor, and it takes about this long to do a whole "make buildworld"! I suspect that I didn't eliminate enough crap when I configured the kernel...

And after all of this, is IPSec going to work? Hell no, I still have to configure it, and fight through broken l2tpd daemons, and whatever all else isn't going to work right "out of the box".

And people wonder why Windows Server is gaining market share...

I suppose that I should mention that the specific thing giving me trouble on Debian is the whole apt/dpkg thing. For example, I had no idea how to figure out which packages were installed on the box (the "rpm -qa" equivalent). Nor could I figure out how to determine which packages were even available for me to install. For example, "make menuconfig" failed in the kernel sources, because for some reason, this box didn't have ncurses. Well, finally, a certain Guju supplied the command "apt-cache search <str>", which can be used to display a list of installable packages, with the names that "apt-get install" will understand. I'm still not sure how to print a list of packages already installed on the machine...

-Andy.
Posted by andyr at 11:36 PM | Comments (4)

January 16, 2004

A spark of life...

So, Mark made a comment on one of my posts, and MT e-mailed that to me today. So, it is starting to show some sparks of life. Rushabh mentioned that I probably have a permissions problem on my /var/log/httpd-cgi.log (which should capture STDERR from cgi scripts), so I'm posting to see if I get any messages now.

-Andy.

Posted by andyr at 02:13 PM | Comments (1)

Boy, Coldplay is a great band

And this e-mail probably really sucks.

-Andy.

Posted by andyr at 12:22 AM | Comments (0)

So, now I'm hacking the Apache httpd.conf...

...that is never good, right?

-Andy.

Posted by andyr at 12:03 AM | Comments (2)

January 15, 2004

Still testing mail

I've set the blasted thing to "debug", and it is supposed to log to stderr now. Where is that going to go? I'm hoping to the apache error log...

-Andy.

Posted by andyr at 11:56 PM | Comments (0)

still testing

Now I'm playing with NetNewsWire, a MacOS X RSS/Weblog app. It's pretty schmansy.

-Andy.

Posted by andyr at 11:40 PM | Comments (0)

Faux Columns

It's a beginning CSS designer's nightmare and a frequently asked question at ALA: Multi-column CSS layouts can run into trouble when one of the columns stops short of its intended length. Here's a simple solution. [A List Apart]

Posted by andyr at 11:39 PM | Comments (0)

January 14, 2004

Testing the sending of mail

Rushabh is a very needy guy, and I should have gone to bed over 33 minutes ago. Blah.

At least I got out of work today just in time to celebrate Kevin's birthday. Go me. It's not like I worked more than 10 hours today... and it's not like Kevin turns 25 once in his life...

Posted by andyr at 12:35 AM | Comments (1)

January 08, 2004

Still testing, homes...

Did somebody run into a crowded room and shout "testing!"?

Posted by andyr at 12:13 AM | Comments (0)