So, it looks like Apple is finally introducing some new ways to purchase video from the iTunes Video Store: the Multi-Pass and the Season Pass. In theory, if all of my favorite shows were offered on iTunes, I could simply subscribe to the appropriate multi-pass or season pass, and download all of my shows to my Mac using my Internet connection. Then, there would be no need for Comcast, my current cable provider. In theory, this should cost me less money, because instead of paying for cable TV and Internet, I can just buy Internet. But, let's try and figure it out.
My cable bill for regular, non-digital cable is right around $50 a month. For that $50, my MythTV machine is recording 2 current-events style shows (the NBC Nightly News and The Daily Show with Jon Stewart), 4 hour-long dramas, and 5 half-hour sitcoms.
Now, let's do some math. I am paying $600 per year for cable TV, and out of that I am watching 11 shows. So, the cost-per-show per-year comes out to $54. In order for iTunes to be compelling, it needs to beat that number. Apple hasn't announced the pricing for a season pass yet, but we can assume that it will be less than $48 ($2 per episode times 24 episodes). So, things are starting to look pretty good.
Unfortunately, things like the Daily Show are going to be more expensive, because you get a lot more than 24 episodes of that in a year. And in fact, the multi-pass for the Daily Show is $10 per month.
So, my math has to get a little bit more complicated:
9 season-pass shows * $48/season = $432
2 multi-pass shows * $120/year = $240
Total for a year's worth of TV on iTunes = $672
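Or, as a quick sanity check in shell (all prices are the assumptions above):

```shell
# Back-of-the-envelope comparison of a year of iTunes TV vs. cable
season=$((9 * 48))          # 9 season-pass shows at $2 x 24 episodes
multi=$((2 * 120))          # 2 multi-pass shows at $10/month x 12 months
itunes=$((season + multi))
cable=$((50 * 12))          # Comcast at $50/month
echo "iTunes: \$$itunes vs. cable: \$$cable (difference: \$$((itunes - cable)))"
```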
Only $72 more than Comcast. That's not too bad, all things considered.
Of course, the negatives are that the quality of the video isn't as good (the resolution is about a quarter the size of regular TV resolution) and the content won't be available until a day after it originally airs on broadcast TV. But I very rarely watch things in realtime anyway, so I don't think that will be too big of a deal for me.
So, I'm going to be watching the pricing for Apple's Season Pass content with interest. If it is really cheap, then I could potentially start saving some real money versus buying cable TV.
-Andy.
Technorati Tags: Apple, Macintosh, DVR, iTunes, Comcast, Video On Demand
So, in typical "guy" fashion, I left all of my Christmas shopping until very nearly the last minute. I managed to get everything done during the course of a four-hour shopping sprint today, which is great. What is not so great (from my wallet's perspective) is that I was bitten by the "impulse purchase fairy", and picked up one of those new-fangled Nokia 770s at CompUSA:
My initial impression, after mucking around with it for a bit this evening, is that this little device is going to be a worthy investment. The screen is pretty amazing (as you can tell from how macnn.com looks), yet the device is super tiny and lightweight.
Expect more nerdy ramblings as I play with my new toy.
-Andy.
Technorati Tags: Nokia 770
So, to add complication to everything else that is going on, Kevin's and my DSL connection went down last night. It all started a few weeks ago, when Kevin got onto his latest kick, which is to build his own server/MythTV/Linux box. I told him that if he wanted to do his own Internet thing, he could buy a cheap Ethernet switch, and peel off his own slice of Internet before it goes into my server/router/NAT/firewall, redefine.
And of course, because Kevin does everything that I say, he went out and bought an Ethernet switch, to go with the new machine. With DSLExtreme (our ISP), we have like 8 static IP addresses. So last night (before heading out to the movies with Pratima and Kalpana), I went on DSLExtreme's website, and added a second static IP address to our account, for Kevin.
And that is when the trouble started.
I think something like 10 minutes after I added that second IP, our Internet was down. Of course I didn't notice for a few hours, since I was out. And by the time I did notice, it was late, so I just went to bed, hoping it would be all better when I woke up.
Well, it wasn't. So I called tech support today, and they told me they would check the line, and call me back in 30 minutes.
Well, they never called back. So, approximately 12 hours later, I called them back, but this time, from Illinois. After the tech support guy spent some time messing around, he said that there was a problem with the router (which I surmised on my own), and that he would call me back in 30 minutes after he escalated the issue.
This time, he actually did call me back, but unfortunately, it was an instance of "good news/bad news". The good news was that he fixed the router. The bad news is that he changed my IP address. This is bad, because I don't have any out-of-band way to get at redefine, in order to change the IP configuration of my machine.
But luckily, as it turns out, I do have out-of-band access to redefine -- Kevin. Thankfully, he was at home, and I walked him through re-configuring redefine, and so now I am back in business.
Woo-hoo! Geek tax paid. Thanks Kev.
-Andy.
After re-installing these packages, I found:
Fun!
-Andy.
So, ever since I bought my new iMac, this has brought "computer buying frenzy" to Sunnyvale. Kevin has upped the ante by purchasing two machines -- an iMac (for GarageBand), and a Sony of his owny (for Linux, Apache, and MythTV). The iMac hasn't arrived from Apple yet, so Kevin scooped up the Sony VGC-RC110G today, to start playing with that first:
In the Reitz family tradition, I made him take the machine apart before anything else happened with it. One of the reasons why Kevin chose this particular machine is because it is supposedly very quiet. Taking a look inside, this could certainly be the case. The 400W power supply has quite a large fan in it, which hopefully will spin at a lower RPM. The video card has only a heat sink (no fan), and the CPU has a heat pipe (potentially water cooled) combined with the biggest heatsink that I have ever seen (and I've seen the inside of the PowerMac G5). Sitting beside the heatsink is an even larger fan than the one in the power supply.
So, there is every possibility that this could be one quiet machine. I don't think it will be quieter than my iMac, but it will certainly be quieter than my Dell Precision Workstation 420 (which has at least one fan that is in some stage of going bad, so has been making quite an annoying racket for months now. But not annoying enough for me to fix it!).
Anyways, hardware-wise the Sony PC consists of an Intel 945P chipset (on an Intel-made motherboard), a Pentium D 830 (dual-core Pentium 4 running at 3.0GHz), 1GB of RAM, and a 250GB SATA disk. The machine also includes an ATI X300 PCI-Express video card (which can probably be made to work in Linux), and a Sony "Giga Pocket" video capture card. This card doesn't appear to be supported under Linux, but I found that it has a Conexant CX23416-22 chip on it, so getting it to work under Linux might just be a possibility. I am encouraging Kevin to work with the folks behind the ivtv project to see if this can be made a reality. I think it would be a nice story: a piece of hardware that initially worked only with Sony's proprietary TV capture software was later expanded to work with Windows Media Center Edition, and could now be made to work in Linux as well.
Expect more updates on that, and Kevin's general progress with this machine, in the coming months. For now, there is a gallery of photos available for your enjoyment.
-Andy.
Late last week, I saw a link called "digg vs. slashdot" posted to O'Reilly Radar. Curious, I skimmed the article, and then checked out digg. At first glance, it didn't really grab me -- but then I read the "about" blurb, found that it is like Slashdot + Wiki, and got intrigued.
O'Reilly has also come through with a link to a BusinessWeek article about digg, and I'm sold. I have been pretty unhappy with Slashdot for awhile now -- duplicate postings, low signal-to-noise ratio in the comments, etc. I have never even bothered to get a Slashdot user account, because I just don't see the point. I have never bothered to add the site to my RSS reader, and I have gotten down to checking it only a few times a week.
But all along, the basic problem with Slashdot hasn't been the site itself -- but rather its editorial approach. And it is really looking like digg is fixing that, in a social software, "let's harness the collective intelligence of everyone", sort of way.
Which I really like. So, check out digg!
-Andy.
Whenever I read an article like this, I find it to be really distressing. I think that if the MPAA were to succeed in all of their goals, consuming content would become not only expensive, but annoying as well. I think that the more the content conglomerates try to crack down, the more people are going to either route around the restrictions or tune out entirely.
The optimist in me hopes that we'll have this sort of cheap and fair-use friendly content in the future, but currently, my inner pessimist is winning out.
-Andy.
A few weeks ago, one of the high priced Tibco consultants that we had in the office mentioned that it was possible to obtain an SSH client for my Nokia 6600 phone. At the time, I didn't do anything about it. But I had some free time tonight, so I decided to do some research. And in fact, he was right -- some crazy folks have ported PuTTY to the Symbian OS, which is what my phone runs.
Behold!
Because I am buying the all-you-can-eat GPRS Internet plan from T-Mobile, I was able to SSH directly from my phone, through T-Mobile's network, through the Internet, to my FreeBSD machine that was about 20 feet away. That's sweet!
The screen and keyboard (specifically, the lack thereof) make this whole thing rather impractical. But I was able to run top, and even my most-favorite of all text editors, joe.
So, I think I have definitely scored myself a little geek toy that I can show off in the appropriate settings.
-Andy.
So, I picked up a spare desktop recently at work, and I have finally wound down enough of my current development/deadline stuff to play with it. I decided to install FreeBSD on this machine, to serve as my sandbox for playing with hot new Open Source software (like Wikis, social bookmarking apps, etc.). My sub-goal, however, is to play with FreeBSD. My personal webserver in my apartment has been running FreeBSD 4.x for years, but I have kindof "lost track" of current developments in FreeBSD.
To that end, I installed FreeBSD 6.0 beta 4 (which I had on CD), and used cvsup and "make world" to upgrade to the just-released FreeBSD 6.0 RC1. So far, things have been quite smooth. FreeBSD detected all of the hardware, and for being not-yet-finished, things seem as stable and polished as ever.
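For anybody who hasn't done a FreeBSD source upgrade, it follows the usual Handbook recipe. Roughly (the supfile path and tag are from memory, so treat this as a sketch, not gospel):

```shell
# Pull the 6.0 release-candidate sources; the example supfile needs its
# "tag=" line edited to point at the right branch first
cvsup -g -L 2 /usr/share/examples/cvsup/standard-supfile

# Then the classic build-and-install dance from /usr/src
cd /usr/src
make buildworld
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
# reboot onto the new kernel (single-user mode is the safe choice), then:
make installworld
mergemaster          # reconcile the config files in /etc
```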
Because Ubuntu Breezy came out the other day, I configured X and Firefox on the FreeBSD machine, so that I could use it today while my main Ubuntu desktop was upgrading itself. The only problem that I had was convincing my Logitech USB trackball to make the "scroll" button emit a "middle click" in FreeBSD. This works like a charm on Ubuntu, but with FreeBSD, I had to do some hacking. I tracked the mouse management stuff to a daemon called "moused(8)". The problem was that by default, my mouse was emitting button 4, but I wanted it to emit button 2 (the middle mouse button). So, I found the -m option in the man page, which looked like it would do what I wanted:
-m N=M Assign the physical button M to the logical button N. You may specify as many instances of this option as you like. More than one physical button may be assigned to a logical button at the same time. In this case the logical button will be down, if either of the assigned physical buttons is held down. Do not put space around `='.
Unfortunately, the first 20 or so times that I tried this option, I couldn't get it to work right. The mouse buttons either didn't behave properly, or my button 4 didn't emulate the middle mouse button. Finally, after much struggling, I re-read that passage very carefully. The option is N=M, but the text immediately following that talks about assigning M to N. Confusing!
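Reading it as "make physical button 4 act as logical button 2", a sketch of the sort of thing that finally worked (the device node and rc.conf knob names are my best guess, so adjust for your own mouse):

```shell
# Kill any running moused, then re-launch it with physical button 4
# mapped onto logical button 2 (the middle button): -m N=M is N<-M
killall moused
moused -p /dev/ums0 -m 2=4

# To make it stick across reboots, the equivalent rc.conf entries
# would be something like:
#   moused_enable="YES"
#   moused_flags="-m 2=4"
```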
But, I made the proper adjustments, and now all is well. I used FreeBSD running xfce4 all day today, and enjoyed the experience. We'll see what Ubuntu Breezy holds in store for me on Monday.
-Andy.
Without a doubt, the most frequently asked question that I receive when I evangelize blogging and RSS at work is "What is the best RSS reader for Windows?". Unfortunately for me, I do not have a good answer to this question, as I have all but stopped using Windows on a daily basis.
But, since 99.8% of EDS employees use Windows, and because I really want to promote blogging and RSS, I would like to have an answer to this question. So, with Google and Virtual PC by my side, I am going to list some popular Windows-based RSS readers in this post. Hopefully, this list, combined with the comments, will help me to arrive at an answer.
When I googled for "best windows rss reader", I found the following:
Based upon the above research, it looks like the top RSS readers for Windows are SharpReader, FeedReader, and FeedDemon.
Well, I was hoping to be able to try a few of these out on Virtual PC, but so far I have spent an hour just patching Windows 2000 and installing the .NET 1.1 runtime. So, I ask the Internet -- which of these readers is worthy of my recommendation?
-Andy.
Here is what logadm.conf ended up looking like:

    # -C -> keep 5 old versions around
    # -e -> e-mail errors to areitz@aops-eds.com
    # -p -> rotate the file every day
    # -z -> compress the .1 - .5 files using gzip. this means we don't need to
    #       sleep before gzip.
    # -a -> gracefully restart apache after rotation
    /var/apache/logs/access_log -C 5 -P 'Fri Sep 16 00:24:48 2005' -a \
        '/usr/apache/bin/apachectl graceful' -e areitz@aops-eds.com -p 1d -z 1 \
        -R '/usr/local/bin/analog'
    /var/apache/logs/error_log -C 5 -P 'Fri Sep 16 00:24:48 2005' -a \
        '/usr/apache/bin/apachectl graceful' -e areitz@aops-eds.com -p 1d -z 1 \
        -R '/usr/local/bin/analog'

The comments should explain some of the options. But the whole '-P' flag was added by logadm, after it ran. This flag tells logadm when it last successfully rotated the log file, so that it knows when it should rotate again. Kindof a nifty hack for performing that function, if you ask me.
It was our sysadmin's last day with EDS today, and as a result, I now have a systems administration aspect to my job. This means that lately I have been noodling around with Sun hardware and software more than I usually do. What follows is the story of how I spent a good chunk of my afternoon:
Due to our local Jumpstart process not properly partitioning the two disks in the E220R that I was trying to build today, I was forced to take matters into my own hands, and fix things manually. Since I'm a hacker at heart, this didn't pose too much of a problem for me. What did pose a problem, however, is getting the E220R box into a position where I could perform surgery on the partition table. Even in single user mode, Solaris refused to unmount /var. Thus, I was forced to find some way to get a non-local version of Solaris running, so that I could perform my surgery.
To my knowledge, Sun doesn't ship any sort of Solaris recovery CD, not even on Solaris 10. Doing a quick Google, I found a few brave souls who have posted instructions for creating your own Solaris recovery CD, but I have things to do, and don't have the time necessary to craft my own CD from scratch. The trick that I know is to boot off of the Solaris install CD, and then break the install process before it gets very far. This can usually net you some sort of shell, which is usually mostly-sortof functional.
When I tried to do this today with a Solaris 10 CD, I found that once the installer started, it mucked with the TTY to the point that when I managed to break it, I couldn't see any characters that I typed, etc. In general, the shell that I got wasn't usable at all. So I tried again, and this time managed to break into the startup sequence before the installer launched, which provided a rather functional shell.
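For reference, the cleaner variant of this trick, if your OpenBoot and media cooperate, is to ask for single-user mode directly rather than breaking the installer (the disk and slice names below are assumptions):

```shell
# From the OpenBoot 'ok' prompt, boot the install media straight into
# a single-user shell, bypassing the installer entirely:
ok boot cdrom -s

# From that shell, the usual repair tools are right there on the media:
format                          # rewrite the partition table
newfs /dev/rdsk/c0t0d0s5        # rebuild the enlarged /var
mount /dev/dsk/c0t0d0s0 /a      # inspect the preserved root, if needed
```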
It really seems like Sun should make this easier, however, by providing some sort of bootable recovery CD. This is one of the "rough edges" that Solaris still carries with it, that the Open Source UNIXes have mostly smoothed over. Fortunately, because Sun has given Solaris the Open Source treatment, Sun doesn't necessarily have to provide such a CD -- the community could step up and do it. Another of the advantages of Open Source.
Anyways, after getting my E220R booted off of the CD, I was able to hack the partition table on the boot disk, run newfs, and have a machine with a preserved root partition, but enlarged swap and more importantly, enlarged /var. Mission accomplished, but only after considerable effort.
-Andy.
By virtue of our IT person announcing that he is leaving EDS, my responsibilities are expanding. While I have prior experience with systems administration, and I have been dabbling in that space while at EDS, I think I'm actually going to have to get more serious about it now.
To get myself acquainted with a Dell PowerEdge 1750 server that we have, I decided to install FreeBSD on it. Seeing as how I haven't been on the "bleeding edge" of FreeBSD for quite awhile now (my home machine is still on FreeBSD 4.10+), I decided to give FreeBSD 6.0 Beta 4 a whirl.
I'm pleased to say that so far, it has been great. The install was a snap (well, mostly because they are still using sysinstall, which I have used many times in the past). All of the server hardware was automatically detected, including the Ethernet adapter, the built-in LSI SCSI Raid, and the dual Xeon processors. In fact, it appears as if SMP is finally enabled in the generic kernel, so I didn't have to re-compile in order to enable the second CPU (that's hot).
Unfortunately, it doesn't look like I'll be able to roll FreeBSD into production -- nobody else on my team has ever touched FreeBSD, and I'm not getting the "eagerness to learn" vibe. So, my options are either Solaris/x86 or Linux, and I think I'm going to take the Solaris/x86 route. But, in the meantime, I'm going to try and play with the new FreeBSD as much as I can. When 6.0 ships, I'm going to have to take a serious look at making the jump on my home server.
-Andy.
...I was forced to reboot my FreeBSD machine today. For the last several months, Kevin and I (mostly Kevin) have been noticing some terrible latency on our DSL connection, as high as 17,000 ms for the first hop. In the past, I have found that resetting our Westell CPE has fixed the problem. But when it happened again yesterday, the CPE reboot trick didn't fix the problem.
This really screwed Kevin over, who uses Microsoft Remote Desktop frequently, and planned to work last night. It was a mild annoyance to me -- with 17,000 ms latency, the web and e-mail were basically unusable. I tried calling tech support last night, but the 24 hour help line was closed.
So, when I woke up this morning, the first thing that I did was to call. Because I knew that if I didn't fix this thing, I would have an all-out riot on my hands (from Kevin). After fighting with the technical support person for the better part of 40 minutes, we came to an impasse. He said that everything was working fine in the network, and I was saying that my FreeBSD / MacOS X combo was definitely not to blame.
I went to work with things still broken, and resolved to come home "earlyish", and hook my Mac directly to the DSL modem, and call again. That would be a scenario that is much more understandable for tech support, and then I could get some resolution. When I got home, I checked that the latency of the DSL link was still through the roof (it was). So, I hooked my Mac up, and checked again. And I'll be damned if things weren't fine. I surfed around, did a speed test, everything -- the thing was performing like a champ.
My faith in life shaken, I came to realize that my FreeBSD box was causing the problem. So, because I didn't have time to troubleshoot any longer, I rebooted it. And that fixed it. Argh!
The Geek Tax having been paid, I am hoping that this is not a sign of impending hardware failure. Or the apocalypse. Either of those two would seriously put me out, anyway.
-Andy.
I, along with the rest of my team at work, am attending Java WebServices training at a Sun facility all this week. At my workstation, there is an old Sun Ultra 10 and a Dell Precision Workstation 210. One of the computers is loaded with Windows 2000 Server, the other with Solaris 9 (you can guess which is which). I found that I couldn't login to the Windows server, so today I decided to have some fun. I brought a Ubuntu Linux live CD in with me, and managed to get the Dell running Linux.
Unfortunately, when Linux booted, I found that the network wasn't working. It appeared as if Sun wasn't running a DHCP server for the lab -- which was confirmed when Chirag plugged in his laptop looking for network. Looking at the Sparc box, I found that it was statically configured. So, I ping'd for a free address, and gave the Ubuntu box an IP on the same VLAN. But no dice -- Sun apparently has separated the Solaris and Windows boxen on different VLANs.
My next trick was to run tcpdump. Usually, by analyzing the broadcast traffic, you can sortof figure out what network the machine is on, and what the default gateway is. From there, you can pick an IP, and be on your way. Unfortunately, I was able to see broadcast traffic from quite a few networks, so it wasn't plainly obvious which network was "the one" for me. I did some trial and error, but I didn't get lucky.
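The tcpdump incantation for this sort of reconnaissance is short; something along these lines (the interface name is an assumption):

```shell
# Listen for broadcast and ARP chatter; -n skips DNS lookups (the
# network is down anyway) and -e prints the Ethernet headers too
tcpdump -n -e -i eth0 'broadcast or arp'

# ARP "who-has X tell Y" lines reveal live addresses on the segment,
# and the address doing the most asking is often the default gateway.
```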
So, the only way in which I could see was to somehow figure out what IP address the Windows install was configured with, and then re-use that IP on Linux. And since I couldn't login to Windows, the only way I could think would be to mount the NTFS partition on Linux, and then munge through the registry until I found what I was looking for.
And believe it or not, that is exactly what I did.
I found this MS document which explains all of the registry entries that MS uses for the TCP/IP stack in Windows 2000. Unfortunately, that document isn't 100% complete -- it focuses more on the "tunables" in the stack. However, it references a whitepaper, which had the details of where things like static IP addresses are stored in the registry.
With that information in place, all I needed to know was which file on disk houses the "HKEY_LOCAL_MACHINE" registry hive. This page told me where that file is backed up, which gave me a clue as to what I should search for on disk. In short order, I was poking around the "%SystemRoot%\system32\config\system" file. The Ubuntu Live CD doesn't appear to contain any sort of fancy hex editor, so I just used xxd, which I piped into less. I was able to search around in that output, until I found what I wanted, and got the Ubuntu box onto the network.
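The whole dig can be sketched in a few commands (the device node is an assumption, and note that registry strings are UTF-16, which changes how you search):

```shell
# Mount the NTFS partition read-only so Windows stays unharmed
mount -t ntfs -o ro /dev/hda1 /mnt

# Registry strings are UTF-16, so in xxd output each letter is
# separated by a dot -- search less for 'I.P.A', not 'IPAddress'
xxd /mnt/WINNT/system32/config/system | less

# If GNU strings is handy, '-e l' decodes the little-endian 16-bit
# strings directly, which is much less squint-inducing:
strings -e l /mnt/WINNT/system32/config/system | grep -B2 -A2 IPAddress
```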
In general, this sort of hacking that I did isn't all that novel. In fact, there is a book out now, called "Knoppix Hacks" (O'Reilly), which details similar sorts of hacks that can be done from Linux. But, I am glad to have stumbled onto my own such hack, because now I get to play with Ubuntu during training. :)
-Andy.
Last week, Rushabh started poking me about enabling SSL on redefine's webserver, so that we could post to our blogs securely. This has been on my TODO list for awhile, so I decided to start down this long, dark road on Saturday. After decoding much of the SSL certificate generation and Apache configuration crap that I needed to go through, I found out that the version of Apache that I was running didn't have SSL support compiled into it.
Drat.
So today, I uninstalled my old apache, and installed a new one that had mod_ssl compiled in. At first, everything was going swimmingly. I got Apache to agree that my new SSL-enabled config file was okay, and then restarted it. All was well, but SSL didn't work. I found that I had to use the 'startssl' instead of the 'start' parameter. And of course, after I figured that out, all hell broke loose.
To make a long story short, first apache wouldn't start. Some googling told me that mod_ssl rejiggers Apache's internal API, requiring all modules to be re-compiled. Great. After a tense half hour of hacking (and Apache throwing random bus errors), I managed to recompile all of the PHP crap, and now things appear to be stable.
whew
-Andy.
The focus of this talk is on improving incident management, more for resource failures than end-user requests, and specifically on SIM. In the past, auto-generated tickets haven't been correlated, and there have been duplicate tickets submitted by users. The technology of Event Manager and Help Desk is advanced enough now that it is worth another shot. Help Desk has more automation capabilities; Event Manager is more dynamic.
Central issue is that alerts say what physical resource is broken, not what service is affected. Not possible to automatically notify users.
Solution: In the CS tradition, insert another layer in between EM and Help Desk. This is SIM (Service Impact Manager). Event Management can reduce event flow (filtering, duplicate detection, enrichment, etc.). Correlation not required by SIM model. Needs work to define service model -- can use discovery to determine infrastructure & some config/topology, but need to define actual user-perceived services by hand. Can do master/child tickets automatically. List of services affected in ticket can be dynamic (as additional services go down or get fixed).
IDEA: event suppression? Change tickets that you cut in HD could have CI information in them, and that could then flow into EM, to automatically suppress alerts during change.
My summary: The idea of a SIM seems like a reasonable one. I didn't get a lot of details about BMC's product, so I can't say if that is something that I would want to see in our environment or not. But I think that there is a lot of potential in the EM/SIM/HD combo for doing automation (which is my bread and butter at EDS).
* original app -> vendor new version -> |-> my updates to app -> NEW MERGED VERSION
I am attending the last day of the Remedy User Group (RUG) conference today. Much like I did for JavaOne, I plan to blog about each session that I attend. So, to all of my non-nerd readers: you have been warned.
-Andy.
So, I've been using iPhoto to manage my pictures ever since I got my first Mac. And while I'm not always happy with it, iPhoto does allow me to at least keep track of the pictures that I'm taking with a minimal amount of effort. Where iPhoto really falls down is when you want to export your photos to the web. I don't have .Mac, so the only other option is some canned HTML that looks kindof funky.
So, I have been using Gallery to fulfill my pictures-on-the-web needs for some time now. However, one pain point has been getting my photos from iPhoto into Gallery. Basically, I have been doing a lot of manual effort, which has consisted of exporting pictures from iPhoto, scp'ing them to my server, then manually importing them into Gallery. The whole process is slow, repetitive, and generally sucks.
I had been thinking about trying to make things easier via Automator, when I stumbled across the free iPhotoToGallery software. This software does exactly what I want -- it provides an easy-to-use interface for exporting my photos directly from iPhoto to Gallery, without any of the annoying pain in-between. It seems like this software is a little rough around the edges, but so far, it has been working for me.
To celebrate, I have posted two new galleries of pictures: July 4th pictures from Chicago, and pictures from my trip to Antioch last Friday.
-Andy.
I spent some time in the vendor pavilion at JavaOne today. One of the vendors that I spent some time picking on was Sun. In particular, I had hardware on the brain today, and I managed to track down like, the one person at Sun's booth who could speak about the present and future of the UltraSparc line. In addition, this fellow was there to talk about the V20z, so I asked him a few questions about that as well.
Regarding the UltraSparc, both the UltraSparc IV and UltraSparc IIIi are currently shipping. The UltraSparc IV has both CMT (Chip Multi-Threading, sort of like Intel's Hyperthreading, but better according to Sun) and multiple cores per CPU. It sounds pretty hot. Unfortunately, it seems like it is only going to appear in Sun's higher-end servers (5U and up), in high densities. The Sun person that I spoke to wasn't sure if it would ever materialize in a workstation, but was doubtful.
The UltraSparc IV is going after highly-parallelized workloads, as is the rest of the industry. However, my group at EDS is working with some applications that are stuck on Sparc, and aren't highly parallel. So, it seems like we're going to be using the UltraSparc III series for awhile. The good news is that the UltraSparc III is up to 1.6GHz in speed now, which is not too shabby (for a RISC CPU).
Moving on to the V20z, I cut right to the chase on this one. I knew that the only reason to buy an x86-based server from Sun would be for the management features. Luckily, I was not disappointed. The V20z has two Ethernet interfaces for the purposes of management. Even better, once you configure an IP on the Ethernet, you can SSH to it, and get full access to the serial console (OS), or the internal management console! That sold me right there. In addition, the management Ethernet ports can act as a hub, which means that you can daisy chain a rack of servers together, and only take up one switch port for management. That is really, really cool. One thing is that you have to use crossover cables, because the management ports don't support auto MDI-X (while the main GigE interfaces do).
I don't really keep up with the state-of-the-art for PC servers, but I don't think that you can do SSH management of them. Sun is definitely kicking ass here.
-Andy.
Two things that suck about JavaOne: it is nearly impossible to find power outlets for my PowerBook, and not all of the rooms have WiFi. OSCON puts JavaOne to shame -- nearly every room has a giant block of power strips, for the geeks to plug their laptops in. It is deplorable that JavaOne doesn't have this.
As for the lack of WiFi, I'm finding myself a bit surprised by this one. I'm finding that since I am blogging the conference this year, having the web available is necessary in order for me to sprinkle links into my posts.
-Andy.
I'm going to pick up a bit of a new format. I'm going to try and blog about the parts of the presentation that are interesting to me, getting away from a full outline of the talk. I figure that the slides can probably be found online somewhere, so I'm going to focus on what my take-aways are.
-Andy.
After a too-short recovery time from NYC, I am in San Francisco today, attending Sun's JavaOne conference. I am going to be trying to blog about each session that I attend, and then cross-posting my public posts to the EDS blogosphere. So, for my non-computery readers (you know who you are), you're going to want to ignore the next like 3 days or so.
-Andy.
I read a great interview with Linus Torvalds the other day. The main thrust of the interview was questioning Linus as to where the Open Source vs. Commercial Source divide is ultimately headed. Pretty interesting stuff, and well worth a read.
I have been doing some thinking about this as well recently, as I try and evangelize Open Source at EDS. My thoughts are pretty similar to where Linus is at. Open Source is going to continue to commoditize certain things like OSes, browsers, and potentially even office suites. The key for Closed Source commercial vendors is going to be to stay one step ahead of the curve, and earn their revenue by innovating. People will pay in order to be at the cutting edge, the state of the art. And companies will pay for support. Those are the two spaces that I increasingly see commercial vendors playing in.
-Andy.
A couple of things about Google have been bouncing around in my head lately, and it all came together with something that I read on Slashdot today. Microsoft's CEO Steve Ballmer made Slashdot with his prediction that Google is a one-trick pony, and as such will be dead in 5 years. Last week, I read an article by Robert X. Cringely, stating that the Google Web Accelerator is a portent of how Google will become a "platform". Thankfully, I don't think that either point of view is exactly correct.
While it's probably true that if Google just sticks to search, Microsoft will be able to do to them what they did to Netscape, I don't think that is Google's game. I think that Google is looking to be a repository for accessing data. And the "platform" (if you can call it that) will be their APIs, which allow 3rd party applications to interact with and add value to this data in their own ways.
Case in point: this Wired news article that I read the other day. It highlights several new applications that are making use of Google Maps in new and interesting ways. One of the applications that immediately grabbed me is something called HousingMaps, which combines apartment listings from craigslist with mapping information from Google. Go ahead and try it out -- it is super neat. But the reason why this application reached out and grabbed me is because this is something I could have really used the last time that I was looking for an apartment. With one click, I saw all of the current craigslist apartment listings as pushpins on Google's map. This is so awesome! And it is all made possible by the fact that Google's "platform" is eminently hackable and extendable by third parties.
Of course, the one thing that Microsoft touts over and over is that they provide a platform -- i.e. Windows -- which is a rich ecosystem for 3rd party developers to build their own applications, thus allowing the free market to serve customers in a way that no monolithic entity can. Well, guess what kids? Google can play that game too. And while I don't want to over-hype this (because hyping some company as a Microsoft-killer is a sure way to get them killed by Microsoft), I sure am keenly interested to see where this is going.
-Andy.
My copy of Tiger finally arrived today (iWork came yesterday). My initial analysis: Tiger fixes iSync not working with my crappy Nokia 6600 cell phone, so that is worth the price of admission right there.
-Andy.
I've temporarily disabled the ability to post new comments to all of the blogs hosted on redefine. The comment spam is getting pretty bad, and I need some time to regroup on a technical level, and come up with an anti-spam solution other than a blacklist. I think that like Carl before me, I'm going to go with TypeKey. This appears to require MovableType 3.x, however, which requires both money, and time, since I can't use the FreeBSD ports collection to install it. Hmmm...
-Andy.
Great stuff on wired.com today: "Hide Your IPod, Here Comes Bill". I read this article with a high degree of amusement. As the Microsoft machine marches on, taking over market after market, it is nice to see them stymied by their own employees. Microsoft employees tend to be a smart lot -- so if they are buying iPods in droves, then it seems like management should try and figure out why, instead of simply banning the practice.
From what I've read about the "PlaysForSure" program, it seems like Microsoft has solved a lot of the reasons why non-iPod mp3 players have sucked on Windows. So eventually, with this software in place, the non-iPods may start to take over the market (just like wintel PCs before them). But for right now, Microsoft has got nuthin'.
But meanwhile, the machine continues to march. I had a quick look at Microsoft's new "MSN Search" the other day, and at first glance, it appears to be a total Google rip-off -- at least from a UI perspective. It looks like the search results that it is returning still aren't as complete as Google's. But how long will it be before Microsoft can out-Google Google?
sigh.
-Andy.
I wanted to have good Internet access while traveling abroad, both to keep on top of work, but also to keep in touch with my friends and family (and TV). Based upon the information that I had from other EDS employees who had gone to Germany, T-Mobile WiFi HotSpots were plentiful, but expensive. In fact, it is 2 euros for every 15 minutes -- 8 euros an hour. Computing the exchange rate is left as an exercise to the reader -- but suffice it to say, this is quite expensive. I did some research, however, and found that accounts on the T-Mobile HotSpot system in the USA can be used on T-Mobile HotSpots in Europe. The advantage, of course, is that in America (being the gluttons that we are), you can buy an "all you can eat plan" for a flat monthly fee. So, before leaving for Germany, I added T-Mobile's HotSpot service to my cell phone plan.
My first week in Germany, I was staying at a hotel that didn't have T-Mobile. The Wifi in the hotel was served by Swisscom, and there was no roaming agreement between Swisscom and T-Mobile. So, I didn't really try to use the T-Mobile service in Europe until last Friday, when I was at Frankfurt airport, waiting to go to England. And of course, it didn't work.
Over the weekend in London, I tried it twice more (both times at Heathrow), and was not successful in getting my account to work. So, I returned to Germany, tired and frustrated by the fact that my T-Mobile HotSpot account wasn't working. My second week in Germany, I am staying at a different hotel which is served by T-Mobile. So, I spent an hour on Sunday evening on the phone with T-Mobile, trying to resolve the situation.
I think that T-Mobile is just like any multi-national company. From the outside, it looks like one homogenous entity. However, internally, due to regional laws and other political reasons, it is really many different sub-companies. The support website for the T-Mobile HotSpot in Germany listed two different phone numbers. In addition, the website advertises that the support personnel speak German, English, and Turkish. When I called the first number, the person told me (in broken English) that the English-speaking support personnel are only in Monday through Friday.
So, at that point, I was skunked. But luckily, I picked up a T-Mobile brochure when I was in London. That had the support number for T-Mobile UK. I called them up, and the helpful Scotsman who answered wasn't able to help me, but he was able to give me the phone number for T-Mobile HotSpot support in the USA. Once connected to T-Mobile USA, I found that my account was locked!
Why was it locked you might ask? Because I reported my cell phone lost, and asked that my account be on hold. When I did this, I assumed that they would lock the cell phone account, but leave the WiFi account. But no, that isn't how T-Mobile works. I have one account, and they have one giant lock, and that is how it goes. So, I had to establish a new, separate account that was WiFi-only, in order to get on the 'net. Sheesh.
The lesson: never lose your cell phone. It really sucks.
-Andy.
The Mac mini has an external power brick, unlike the iMac G5:
Still, not that big of a deal, considering how small the Mac mini is. It would be great if it used the same power supply as the PowerBook/iBook, but oh well.
-Andy.
I flipped one of the Mac minis that Apple had on display over, and got a picture that I haven't seen anywhere else yet:
The bottom appeared to be a solid chunk of metal, with the Apple logo etched into it. Sweet.
-Andy.
Apple has a wall that runs along the side of their booth, devoted to the iPod shuffle:
Apple has really nailed this product. Again.
-Andy.
The first law of buying a computer is that as soon as you buy it, there will be something {faster, sexier, smaller, cheaper} for purchase (choose your own attribute). Well, I just saw this on Gizmodo. And it certainly looks slicker than the xPC that I just bought. Rats!
-Andy.
After months of dithering, I finally bought the PC-of-my-media-center dreams:
This box is going to eventually house my TV capture card, and run Linux and MythTV, serving all of my personal video recorder needs. The hardware:
I know that it is way more power than I need for a simple PVR, but I want it to be fast when I crunch video down to MPEG-4. I also want to rip DVDs with it. And run SETI@home or something (since it has to be on all the time anyway). So, I splurged a bit.
For right now, I've got Windows XP on it, because there are a couple of games that I want to play, and I wanted to inaugurate this computer in style by killing the heck out of Kevin in Urban Terror. Also, I don't have time to mess with Linux right now (see the bit about Urban Terror). But when I get back from Germany, it is going to be on.
Once again, the gallery is here.
-Andy.
So, we have been getting a fair amount of comment spam for the last several months. Once I installed Jay Allen's "MT-Blacklist", it has really only been annoying. When I got home from work today, however, I noticed that my machine was thrashing. It was working so hard, that the console was unresponsive. A reboot later, and I was back in control of the thing. Doing some initial investigation, it looked like somebody (or somebodies) was jamming on the comment system for the blogs that are hosted here. I disabled it quickly, so that I could get on with my life.
Later (after dinner & "The Daily Show"), I found that as soon as I re-enabled the "mt-comments.cgi" script, the box was immediately hammered again. I managed to narrow all of the spam traffic down to 4 IP addresses, being served by an ISP called SAVVIS. Looking in DNS, it looks like these IPs are being used by a company called "Marketscore". From their website, it is hard to tell if they are legitimate or not. For the time being, I have firewalled them off, and fired off an e-mail to the abuse department over at SAVVIS. But in 2005, I'm going to have to do two things:
- Come up with a better anti-spam solution for the blogs hosted here.
- Tune my FreeBSD machine -- because getting pounded with HTTP CGI requests shouldn't hork the box to the point that I can't login on the console.
-Andy.
So, on MacNN today, I noticed a blurb about some instructions for compiling the MythTV Frontend on MacOS X. I had a hard time loading the page (I tried all day -- it was posted to some wiki that was overloaded), but finally managed to get a peek late this evening. I found the instructions for compiling it up, but that looked like a bunch of, well, work. Luckily, I also found a pre-compiled binary, and so I was off to the hacking races. I had to do some mysql hacking, and poke some holes in my DMZ firewall, but even after all of that, I was having issues.
It seems like MythTV stores information about the backend servers in the MySQL database. This information includes the IP address of the server. So, my mythfrontend on MacOS X was connecting to the mysql database on my myth box, and then trying to connect to the mythtv server ports (6543 and 6544) on the backend server. Unfortunately, when I configured mythtv, I was thinking only of the single-box case, and so it appears as if the backend server IP address that I configured is 127.0.0.1, not the real IP of the box. This means that mythfrontend running on my PowerBook was trying to connect to 127.0.0.1 in order to watch TV.
I don't really know how to fix this, but it probably involves changing some data in the database. Not something that I want to do on my PVR, while it is recording Badly Drawn Boy on Last Call with Carson Daly. So, what did I do? Why, I whipped up an SSH tunnel of course. But that's not the amazing part -- the amazing part is that it actually worked! I was able to stream tonight's episode of The Daily Show, through SSH, over my 802.11g wireless, and watch it in realtime on my PowerBook.
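For the curious, the tunnel was conceptually something like this. This is a sketch, not the exact command I ran; 'mythbox' is a placeholder for the backend's real hostname. Forwarding MythTV's two ports to localhost means the frontend's connections to 127.0.0.1 land on the backend after all:

```shell
# Forward the MythTV backend port (6543) and status port (6544) so that
# connections to 127.0.0.1 on the PowerBook arrive at the backend box.
# 'mythbox' is a placeholder hostname; -N means run no remote command.
ssh -N -L 6543:127.0.0.1:6543 -L 6544:127.0.0.1:6544 andy@mythbox
```

With that running, mythfrontend's doomed connections to 127.0.0.1:6543 get carried over the wireless to the real backend.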
This is really awesome. It means that I now have a wireless TV in my apartment (and it didn't cost me an arm and a leg!). Of course, I don't really need such a thing in my current one bedroom apartment -- but I can envision several uses when I'm back living with a roommate again.
-Andy.
So, I am glad that Ranchero Software finally released NetNewsWire 2.0, even if it is only a beta. I bought 1.0.8 about a month after I started blogging, and I was starting to get a little unhappy with it, because the software appeared to be stagnating. But I bought a copy so that I could encourage further development! But, my purchase has paid off, because 2.0 is awesome. It finally supports Atom feeds, which means that I can finally have Chris' blog polled from NetNewsWire. It has a new swanky tabbed interface for viewing HTML articles right in NetNewsWire (which is vastly superior to popping open new Safari windows). Plus, it seems like it is faster at going out and polling for new articles, which is quite welcome.
Those are the new features that have immediately jumped out at me. Well, there is one more thing -- I had numerous beefs with the built in blog editor, but I used it for posting to my blog anyway. In NetNewsWire 2.0, Ranchero has gone ahead and put this feature out of its misery, and removed it from the product. But am I mad?!? Heck no, because they have gone ahead and rolled out a dedicated blogging client, MarsEdit. I've been using it for the last several days, and so far I am pretty happy with it. It has already won me over with how easy it is to paste URLs (much faster than in the old client).
If you have a mac, I definitely recommend checking these two applications out.
-Andy.
After doing some research on the 'net, I found that Fujitsu offers a 5 year warranty on their enterprise SCSI drives, which is pretty amazing. The drive in redefine that is failing was manufactured in March of 2000, so it is 4.5 years old. So, I called up Fujitsu today, to see what I could do about getting my drive repaired under warranty. The first piece of information that the nice Fujitsu woman asked me for was the model number of the drive. I gave it to her (noticing that it ended in the letters "DL" as I said it), and upon hearing this, she immediately went into the song and dance -- "Did this come with a Dell computer?". To which I replied that it did, and she of course told me that I had to deal with Dell directly.
It seems that one of the ways in which Dell gets a discount on parts is to negotiate a lesser warranty with the manufacturer. They then turn around to me, the customer, and sell me an entire computer with a 1 year warranty, that I would need to pay to extend, even though if I were to buy the parts myself, individual ones may have longer warranties.
On a lark, I contacted Dell (I say "on a lark" because I knew that my computer has long since been out of Dell's warranty), and the Dell representative told me that my computer was in fact out of warranty, and that I could look into buying a replacement part from Dell if I wanted.
I'm not really pissed off about any of this, I just find it interesting. It is also another case for building my own computer that I hadn't really considered before.
Another interesting thing that I learned when researching my soon-to-be-completely-dead disk is that while Fujitsu warrants the non-OEM version of the drive for 5 years, they go on to say that it was only designed to last for 5 years. Basically, after 5 years, any additional mileage that you get out of it is due to your own personal good fortune. I find that to be interesting for an enterprise-class device, which can oftentimes be in service for far longer than initially planned. It also makes me suspicious of 10,000 RPM (and higher) drives. My gut tells me that the higher rotational speed of the platters hampers drive longevity. The 7,200 RPM IBM drive that I am using now as a backup was manufactured in September of 1998, and has been in continual operation since I have owned it, until June of this year. I think that IBM really knew how to make disks, once upon a time...
All of that being said, I am still running off of the suspect Fujitsu drive. Since fixing the bad sector, it seems to be performing okay. I beat it to hell today upgrading a whole bunch of ports, and I haven't seen any more SCSI errors. I think it is just a matter of time, however...
-Andy.
So, I did a bad sector check in the SCSI BIOS (Adaptec's SCSI chipsets are awesome), and the check found one bad sector on the 9Gb Fujitsu disk, which I told it to remap. The machine seems to be fine now, but bad sectors are indicative of pending drive failure. So, I'm going to have to come up with a long-term solution to this problem. For the time being, I have resurrected my old 4.5Gb IBM U2W SCSI disk, and slapped that in redefine. I've set up a cron job that rsyncs the relevant bits from the 9Gb disk over to the 4.5Gb, so I can boot off of that in an emergency. But I think that going forward, I need to come up with some sort of RAID solution, so that this machine can drop a disk, and I can wait until the weekend in order to deal with it.
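The cron job itself is nothing fancy. Here's a sketch of the crontab entry (the directory list and the /backup mount point are illustrative, not my exact setup):

```shell
# /etc/crontab entry: every night at 3:30, mirror the critical bits of the
# 9Gb disk onto the 4.5Gb disk (mounted at /backup). -a preserves
# permissions and ownership; --delete keeps the mirror from drifting.
30 3 * * * root rsync -a --delete /etc /usr/local /var /backup/
```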
But this caps a "bad computer day" for me. Not only did redefine have some issues, but towards the end of my work day today, a server that I was working on went south. A co-worker was doing a package install at the time, and we suspect that the package had something like "rm -rf $INSTALL_LOC/" in a post-install script. Of course, if the "$INSTALL_LOC" variable is null, then the shell will translate that command to "rm -rf /", which on any UNIX box (and Solaris in particular) is quite a bad thing to do.
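For anyone who wants to see that failure mode (safely), here's a small sketch. The shell's ${VAR:?message} expansion aborts when the variable is unset or empty, which is exactly the guard that post-install script was missing:

```shell
#!/bin/sh
# If INSTALL_LOC is empty, "rm -rf $INSTALL_LOC/" degenerates to "rm -rf /".
# The ${VAR:?msg} guard makes the shell bail out before rm ever runs.
INSTALL_LOC=""
if ( rm -rf "${INSTALL_LOC:?refusing to run with an empty INSTALL_LOC}"/ ) 2>/dev/null
then
    echo "rm ran -- that would have been bad"
else
    echo "guard tripped, nothing removed"
fi
```

Running it prints "guard tripped, nothing removed" -- the expansion error kills the subshell before rm gets a chance to do any damage.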
sigh
-Andy.
It looks like redefine's (the server that hosts this blog) SCSI disk is failing. This machine has a 9Gb U160 10k RPM SCSI drive holding its primary boot and root partitions, and a 160Gb IDE disk serving as /home. I was messing around from work today, trying to update my ports collection, and the machine has been acting strange. A hit from the dmesg command shows a lot of messages like this:
<<<<<<<<<<<<<<<<< Dump Card State Ends >>>>>>>>>>>>>>>>>>
(da0:ahc0:0:0:0): SCB 0x6b - timed out
sg[0] - Addr 0xb184000 : Length 4096
sg[1] - Addr 0x7f85000 : Length 4096
sg[2] - Addr 0xd546000 : Length 4096
sg[3] - Addr 0xf127000 : Length 4096
(da0:ahc0:0:0:0): Queuing a BDR SCB
(da0:ahc0:0:0:0): Bus Device Reset Message Sent
ahc0: Timedout SCBs already complete. Interrupts may not be functioning.
(da0:ahc0:0:0:0): no longer in timeout, status = 34b
ahc0: Bus Device Reset on A:0. 5 SCBs aborted
Dang. Everybody who has data on this box should officially back it up, starting now.
-Andy.
So, Mark noticed on Friday that the spammers have found redefine, and as a result, several of our blogs have been "crapflooded" -- i.e., the comments to our posts were filled with spam. Rushabh suggested that I install MT-Blacklist, which I did today before going into the city. So far, it has been useful in de-spamming my blog (deleting comments in the MT user interface is painful), but the jury is still out. The problem with any sort of blacklist is that you have to keep the definition file updated (which doesn't appear to be easy to automate), you can get false-positives, and the spammers can always be "one step ahead". Ultimately, I think that I may either just disable comments, or move to a forced-registration system, like what Carl is using now.
-Andy.
Every time I go to Chris's house, and see him using his Tivo, I want one. But every time, I find some reason (or 3) why Tivo just isn't right for me. Well, I was just at Chris's place at the end of July (for an entire week), and so he really had me on this whole Tivo thing. And then, last month, Tivo started offering some rebate thingy, which made it even more compelling.
But still, I resisted. I was going to write a long blog post about why I resisted, but instead, I'm going to write about the solution.
I have thought about just building my own Tivo-like device before, but I didn't think the end result would work well enough for me (I am a demanding TV user). But when I was up in Seattle, Fredrik told me that he built himself a MythTV box, and that it was working great. So, he totally sold me on it.
I've spent the last couple of nights surfing up PC hardware, because I don't really have anything suitable for integrating into my entertainment center. This whole project has kind of morphed into me buying a Cube PC, because I have always thought those things are cool. I would have bought one years ago, but I got into the whole Apple thing instead. Unfortunately, to assemble the Cube PC that I want would cost about $1k (when all is said and done). That is a little bit much for me to spend, considering I'm still not 100% sure that this is all going to work.
So, I decided to just go ahead and buy the cornerstone of the PVR, the TV capture card, and see if I could get it all working in my old dual Pentium-III 500Mhz machine. The card of choice amongst the Linux crowd is the Hauppauge WinTV PVR-250. I saw over on Gizmodo that Circuit City is selling the thing with some massive rebates, so today I pulled the trigger on that. I had to go up to Hayward in order to pick the darn thing up, which was okay, because I got to throw Mike a bone.
So far, I have managed to install the card on my Windows XP partition, and in less than an hour, have it at a point where I could watch "The Daily Show". Over the long weekend, I will be installing Linux, and seeing if I can produce a workable prototype. I'll try it out for a few weeks, and if it seems like the whole thing is going to work, then I'll buy some sort of entertainment PC. This will also buy me some more time, so I can find the exact PC that I want.
It is gonna be great.
-Andy.
I am playing around with some new software that I downloaded, instead of going to bed (I have a cold -- I really should sleep). The software is called "Photon" by Daikini software. Photon purports to be some sort of application that makes it easy to post pictures from iPhoto onto a blog. The documentation on their website is non-existent, so it took me a while to figure out how it works. After several test posts, however, it looks like I got it working.
The biggest drawback to this system is that when you export a single photo from iPhoto to the blog, you end up with an entry like the one that I just made. There is no text that surrounds the photo, to give it more description. I know that Carl very rarely writes verbiage to accompany his photos, but I'm not Carl. The second major drawback is that I like for the thumbnail of the image to itself be a hyper-link, which takes the reader to the larger version of the image. It doesn't look like Photon supports this method of doing things.
So, I'm not sure if I'm going to pay the $10 or not. I'll have to play around with it more, I guess.
So, about the picture? There really isn't any story -- on my last day in Seattle, I was hanging with Justin and Sarah, and transitioned to hanging with Rushabh, Kristen, and Ted. I didn't have too much time before I had to head to the airport, so I suggested that we check out the University of Washington campus. It really worked out: since Kristen is a student there, she was deputized as a tour-guide for our group. I can't remember which building that is, but the picture looked good, so I posted it.
-Andy.
The EECS department at Case (where my e-mail is hosted) has recently added DSPAM to their mail servers, instead of SpamAssassin. The switch has been a little annoying -- it has forced me to figure out how to move messages between different IMAP folders in pine, which, while I have figured it out, still takes too many keystrokes. The reason why messages have to be shifted around is because DSPAM is a learning system, similar to Mail.app's Junk system.
And of course, because DSPAM needs to be trained, it has really sucked at finding SPAM for the last couple of days. I think that it might be getting a bit better, but it is hard to say. I think what might be hurting it is that when I do use Apple's Mail.app, it plucks the SPAM out into its own folder, and as a result, DSPAM doesn't get trained. I'm going to have to research how to make these two kids play better together.
-Andy.
When I was out at OSCON two weeks ago, I performed a little experiment. Since I knew that I was going to be using only one computer, my PowerBook, for the entire week, I decided that I would not delete any SPAM. Instead, I would let it all pile up in the "Junk" folder in Mail.app. I have been curious for a while as to how much SPAM I'm actually getting, but it has been hard for me to track, because I am pretty fanatical about deleting it.
So, from Sunday the 23rd of July through Sunday August 1st, I didn't delete a single SPAM. And the total that I reached? A mere two hundred and seventy-seven messages. On the one hand, that is a lot of e-mail. It occurs to me as I write this that I should also have tracked the total number of e-mails that I received in that week, so as to determine the ratio of signal to noise in my inbox. But, one conclusion that I can reach is that I'm probably getting less SPAM than many other people out there on the 'net.
Oh, and the other conclusion is that SPAM sucks. But everybody knew that already, right?
-Andy.
So, let's say you have a bunch of DivX 5.0-encoded AVI files of a live concert. And you really like these files, and have them playing all of the time -- not so much so that you can watch, but so that you can listen to the music. Well, at that point, it sure would be a lot more convenient if these files were mp3 files, instead of DivX video files.
And let's further suppose that this very situation happened to a certain someone who owns this blog, and that he decided to hack his way out of it. This is what you might do:
ffmpeg -acodec copy -i Denali02-Blackcat-Apr2003.avi 02.mp3 -map 0.1:0
The 'ffmpeg' command comes from an open source project for recording, encoding, and slicing video and audio files. I had a vague notion of this program (I remembered installing the FreeBSD port as a dependency for something more interesting, like VLC I think). But a little googling brought me back to this program, and the above command line (applied to each of my video files) was exactly what I wanted.
I know that I could have used some program like "Audio Hijack" in order to get the raw audio, but then I would have had to re-compress it into mp3 format, and that would have been too lossy for my tastes. Instead, I wanted to simply demux the video files, stripping off the audio stream and saving it to a separate file. Which is exactly what the "-acodec copy" flag did -- it specified that the audio codec to be used in the transcode should be a straight copy. The other bit of magic is the "-map" flag, which performs a one-to-one mapping from a stream in the input file to a stream in the output file. VLC said that the audio stream was stream #1, but according to ffmpeg, it was stream #0.1. Go figure.
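Since the command had to be applied to each file, a small loop takes care of the whole directory. This is a sketch that mirrors the flags above, and it assumes every file has the same stream layout (audio at ffmpeg's stream #0.1):

```shell
#!/bin/sh
# Demux every AVI in the current directory, copying (not re-encoding)
# the audio stream out to an mp3 named after the source file.
for f in *.avi; do
    [ -e "$f" ] || continue                    # no AVIs here; do nothing
    ffmpeg -acodec copy -i "$f" "${f%.avi}.mp3" -map 0.1:0
done
```

The ${f%.avi} expansion strips the extension, so Denali02-Blackcat-Apr2003.avi becomes Denali02-Blackcat-Apr2003.mp3.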
But anyway, now I have my mp3 files, and they are in iTunes, and based on the lyrics, I am figuring out which song is which. Awesome!
-Andy.
I managed to slay the twin dragons of Windows XP and productivity today, by getting my new Dell installed with a fresh copy of XP, activated by the key printed on the box. To Michael K.H. Au-Yeung, wherever you are, I take my hat off to you. I couldn't have done it without you.
And while I was at it today, I managed to get pathetically little "real" work done. Even better!
-Andy.
I'm pretty sure that my first Dell, that I bought back in '95, was assembled in the USA. I'm not claiming that all of the parts the computer was made from were made in the USA. I'm just saying that final assembly was done in the US -- at least, that's what I remember.
While sticking my head inside the case of the machine that I got today, I noticed that a lot of components were made in China. Components like the case, power supply, cables, motherboard. And I got to thinkin' -- I bet this thing was assembled in China, based almost entirely on Chinese-built parts.
And then I thought about how times had changed.
And then I got one last whiff of the "new computer smell", and got to playing with my new toy.
-Andy.
So, I got my new machine at work today.
Let me just take a moment to digress -- finally! I have been waiting to get a new machine for like, ever. The other two people on my team got new machines last year (like around Octoberish) -- but did I get any love? Not by a long shot. But now I am in possession of a 3Ghz P4 with a cool Gig of ram (up from a PIII 866 that I just got up to 512Mb like, 6 weeks ago). Man, am I ever excited.
It's too bad that Windows XP decided to give me the old "once again".
The problem, in a nutshell, is that this Dell came from the factory with the stock corporate image on it. But for various reasons, I got it into my head that I didn't want said image on my desktop. So, I figured that I could just use my Windows XP CD to perform a fresh install, but use the Product Key that is on my new Dell in order to perform all of the activation procedures that appear to be a "necessary evil" these days.
But of course, Microsoft is on to me. They saw me coming from a country mile. It appears as if Microsoft is mastering several different "Windows XP Professional" CD images, with specific differences between OEM and Retail. So a Retail XP CD won't accept Product Keys that are for the OEM version, and vice-versa.
Luckily for me, I didn't get onto BitTorrent today, and I most-certainly did NOT download a Windows XP "8-in-1" ISO image. Which is all a very good thing, because it means that I won't be wasting my entire day fighting this battle tomorrow...
-Andy.
I ordered an iSight from Amazon last week Friday. My dad ordered one from the same company, on the same day.
He got his today. Amazon is projecting that I'll get mine by the middle of next week.
Bitches.
-Andy.
So, it was another really long day at work today. I spent the vast majority of it bashing my head against a problem that one of our NT SAs was having. Without getting mired in the boring details -- he was trying to image a server using ghost, and dump the resulting Gigs 'n Gigs of data onto one of our Samba servers.
Everything with his boot disk seemed fine, but when ghost got started, it died right away saying "not enough space on device for image headers", or some such crap. I checked to make sure I could create a file on the shared drive, and that the drive had plenty of space (check and check).
So, I thought that maybe it was a problem with some sort of file size limit, or something. I set out to find a copy of dd for DOS (so that I could run dd if=/dev/zero of=some_large_file.junk bs=1024 count=1048576). Basically, I wanted to see if I could write out a gig+ file in one crack. Of course, I couldn't find anything that ran in plain old DOS.
So, I set out to write a batch file that did much the same thing.
Much remembering, cursing, fighting, and debugging later, I finally had a script that reasonably approached what I wanted. I took it down to the server room, mentally preparing myself for a long wait as the computer wrote zeros to my test file. However, I was surprised when my batch file started printing an "out of disk space" error right after I started it. How big of a file did it write before the disk space errors started?
2,857 bytes.
Yes, that is it. A little more than 2Kb. Cripes. Did the shared volume have well over 2Kb free? Oh hell yes it did.
To make a long story even longer, after much debugging of boot disks, samba (even debug level 10 was no help), and voodoo later, I still don't know what's wrong. I came up with a work-around for the NT guy (solution: use an NT box as the server), but I still don't know what's up with Samba.
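For reference, cranking Samba's verbosity is just a couple of lines in smb.conf (the path varies; on FreeBSD it typically lives under /usr/local/etc). These are standard smb.conf parameters:

```ini
[global]
    ; 10 is Samba's most verbose debug level; 0 is the default
    log level = 10
    ; one log per connecting machine (%m expands to the client's NetBIOS name)
    log file = /var/log/samba/log.%m
```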
Fast forward to this evening/morning, after all of the cards (and there were a lot of cards played) have finished. Kevin and I took on round 2 of his mission to be able to get at his code from the VPN. The sysadmins where he works have wisely configured the Samba server that he needs to disallow IP addresses associated with the VPN. Why? Because they want Kevin to learn about SSH tunneling and such so that he'll devise a work-around.
We beat up on Windows enough that I got to the point where I was tunneling NetBIOS-in-TCP/IP-in-SSH in my test environment. But, once again -- I noticed some more strange Samba behavior. When I tried to connect through the tunnel, anything that allowed a guest connection worked, but shares that required my username/password didn't! Argh!
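For the record, the working half of the setup looked roughly like this (hostnames are placeholders, and the Windows client side needs the usual loopback tricks to talk to a local port 139):

```shell
# Forward a local port through the SSH gateway to the Samba server's
# NetBIOS session port (139). Hostnames here are made up for the sketch.
ssh -N -L 1139:fileserver:139 myuser@vpn-gateway &

# Then point an SMB client at the forwarded port; from a Unix box:
smbclient -p 1139 //localhost/share -U myuser
```

That much worked; it's the authenticated shares through the tunnel that fall over.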
I give up. Somehow, I just know that this is all Microsoft's fault...
-Andy.
That in Safari, when working in a form "textarea" HTML element, switching tabs resets the cursor back to the top of the textarea?
That is very annoying! Especially when you're trying to edit your template file in Movable Type...
-Andy.
I set up Movable Type on my machine just to mess around. But, my friends started asking me for accounts (and then their friends), and this whole thing is starting to take off. Mark has an article up that has like over 5 comments! Wow!
Does this mean that I have to be a responsible admin-type-guy now?
-Andy.
So (can I over-use that word any more?), without getting into too much detail, yesterday was spent trying to get some friends set up with Dreamweaver MX, so that they could publish content to Mike and Kevin's Linux box. This particular Linux box happens to be running Debian, thanks to the influence of "a certain Guju"...
Normally, I can deal with Linux in just about any flavor, but Debian is different enough to be giving me fits. After wasting a good chunk of yesterday fighting various Dreamweaver/SSH/ftp/NAT/tunneling issues, I decided that I would like to leave all of that in the dust by configuring the Linux box to act as a VPN server for the Windows VPN (PPP over L2TP) client.
Sounds simple enough, right?
Well, it took all day today, but it finally looks like I will have a kernel that both:
- Has the prerequisite FreeS/WAN support in the kernel, and
- Compiles to completion, without error.
Ugh. So this means that I spent a lot of time fighting to get all of the requisite sources and packages on the box. Then I fought to understand Debian's unique way of compiling the kernel (make-kpkg). And then I watched the compile fail in the FreeS/WAN "ipsec_init.c" code.
Much, much, much use of Google later, I decided to apply a patch to the "freeswan-kernel-patch" (patching the patch -- that is great). One hunk from that patch failed to apply, so I applied it by hand. Now things appear to be working -- of course, I say "appear" because the kernel is still compiling (it has been at least 2.5 hours as of this writing). Granted, this box is a single-processor 400 MHz Celeron. But come on, my FreeBSD box has a comparable processor, and it takes about this long to do a whole "make buildworld"! I suspect that I didn't eliminate enough crap when I configured the kernel...
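For anyone else headed down this road, the incantation I pieced together looks roughly like this. Package names and the patch flag are from memory and may not match your box exactly (PATCH_THE_KERNEL=YES is, I believe, the other route to the same place):

```shell
# Get the sources, the build tooling, and the FreeS/WAN kernel patch.
apt-get install kernel-package kernel-source-2.4.18 libncurses5-dev
apt-get install freeswan-kernel-patch

cd /usr/src && tar xjf kernel-source-2.4.18.tar.bz2
cd kernel-source-2.4.18
make menuconfig                    # needs ncurses, hence libncurses5-dev

# make-kpkg applies the named patch, builds, and wraps the result in a .deb
make-kpkg --added-patches freeswan kernel_image
dpkg -i ../kernel-image-2.4.18_*.deb
```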
And after all of this, is IPSec going to work? Hell no, I still have to configure it, and fight through broken l2tpd daemons, and whatever else isn't going to work right "out of the box".
And people wonder why Windows Server is gaining market share...
I suppose that I should mention, the specific thing giving me trouble on Debian is the whole apt/dpkg thing. For example, I had no idea how to figure out which packages were installed on the box (the "rpm -qa" equivalent). Nor could I figure out how to determine which packages were even available for me to install. For example, "make menuconfig" failed in the kernel sources, because for some reason, this box didn't have ncurses. Finally, a certain Guju supplied the command "apt-cache search <str>", which can be used to display a list of installable packages, with the names that "apt-get install" will understand. I'm still not sure how to print a list of packages already installed on the machine...
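(Later addendum, mostly for my own reference: I believe these are the missing pieces of the crib sheet, though I haven't tried every one of them on this particular box:)

```shell
apt-cache search ncurses          # search the index of installable packages
apt-get install libncurses5-dev   # install by the name apt understands
dpkg -l                           # list installed packages (the "rpm -qa" equivalent)
dpkg -L libncurses5-dev           # list the files a given package installed
dpkg -S /usr/include/curses.h     # find which package owns a file ("rpm -qf")
```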
-Andy.
So, Mark made a comment on one of my posts, and MT e-mailed it to me today. So, it is starting to show some sparks of life. Rushabh mentioned that I probably have a permissions problem on my /var/log/httpd-cgi.log (which should capture STDERR from CGI scripts), so I'm posting to see if I get any messages now.
-Andy.
And this e-mail probably really sucks.
-Andy.
...that is never good, right?
-Andy.
I've set the blasted thing to "debug", and it is supposed to log to stderr now. Where is that going to go? I'm hoping to the apache error log...
-Andy.
Now I'm playing with NetNewsWire, a MacOS X RSS/Weblog app. It's pretty schmancy.
-Andy.
It's a beginning CSS designer's nightmare and a frequently asked question at ALA: Multi-column CSS layouts can run into trouble when one of the columns stops short of its intended length. Here's a simple solution. [A List Apart]
Rushabh is a very needy guy, and I should have gone to bed over 33 minutes ago. Blah.
At least I got out of work today just in time to celebrate Kevin's birthday. Go me. It's not like I worked more than 10 hours today... and it's not like Kevin turns 25 once in his life...
Did somebody run into a crowded room and shout "testing!"?