A couple of years ago, we picked up a heavy-duty Xerox cross-cut shredder. It was supposed to be extra heavy duty with all-metal blades, blah, blah, blah.

Recently, the blades stopped moving. The motor was clearly chugging away, but the blades just wouldn’t move at all.

I disassembled the unit thinking that I would be able to fix it. Instead, I discovered a remarkably stupid design decision.

The motor was clearly a fairly heavy-duty unit with a nice little cooling fan on one end and a metal worm gear on the other. The gear mechanism between the motor’s worm gear and the blades was made entirely of plastic. Not only that, but the teeth on the worm gear and on the first gear in the plastic gear train didn’t even match in size or angle.

The entire gear showed signs of wear, but the spot where it had stopped rotating was completely chewed to hell. It looked as if someone had taken a Dremel to the plastic and gouged a big chunk out of it.

Metal to plastic in a high torque environment? Stupid, stupid, stupid.

Not that I’m surprised. In general, consumer robotics suck. Consumer device manufacturing is focused on minimizing cost by using cheap components and designs that cut tolerances as tightly as possible.

That is not such a bad combination with electronics, where the inputs are controlled and there are no moving parts. With mechanical devices, the wear and tear of moving parts means that the tight tolerances and overall focus on cheapness yield a fragile product that can easily break when faced with the chaos of real-world use.

When I’m buying any random piece of equipment, I always consider it from a complexity standpoint. The less complex it is, the fewer things can go wrong and the fewer cheap parts there are to fail. When CD changers first hit the streets, I picked up a 5-disc carousel unit. Simple design, few moving parts. Many people I knew picked up 6-disc cartridge-style changers at about the same time. The carousel player lasted through over 7 years of heavy [ab]use while most of the 6-disc changers failed within the first few years.

Update: We decided to drop a whopping $35 on a Michael Graves Cross-Cut Paper Shredder (Target). Of course, I didn’t look at the reviews before buying it. This may be another adventure in stupidity. It was really the built-in pencil sharpener that sold us.

50 cents per gigabyte seems to be the price of hard drives these days. In all cases, the only way to achieve 50 cents/gigabyte is to take advantage of rebates, so it isn’t the “I want to build a data library and need 2,000 200GB drives” price. DealMac has several deals listed and Fry’s has stacks of hard drives, several of which break down to 50 cents/gigabyte.

One year ago, the price was at least $1/GB, and that was for relatively slow drives. Now, 50 cents/GB buys you an 8MB cache on a 7,200 RPM drive with an 8–9 ms average seek time.

But the warranty period has been vastly reduced. Three to five years ago, drive warranties were typically 5 years. Then they dropped to 3. Now it is a 1-year warranty, although with Western Digital, at least, you can buy an additional 2 years for $15. Not sure I like that trend. Nope. Sure don’t like it.

I picked up a 160GB Western Digital to be used as the backing store for my encoded CD collection. The CDs will effectively act as a backup and I use the “Purchased Music” group in BackUp.app to back up the songs purchased from the music store.

I’m disappointed by two particular details of the current hard drive market. First, the only effective means of backing up such a large volume is RAID, but since drives are only sold at the discounted price by taking advantage of a one-per-household rebate, the second device is expensive (actually, this turns out not to be the case for the WD drive; WD will honor up to two rebates per household!). Second, no one seems to have built a decent 1394 case with two drive bays that isn’t also significantly more expensive than a single-device case.

How far we have come…

In 1987, I paid $1,400 for an 80-megabyte hard drive for my Mac Plus. Now, $1,400 would buy nearly a terabyte without rebates (300 GB drives for $400), and 80 MB is less space than the amount of RAM shipped in a typical system.
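
That 1987 drive works out to $17.50 per megabyte, or roughly $17,500 per gigabyte, versus 50 cents per gigabyte today: a price drop of about 35,000x.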

… an infinite # of miles to go.

I actually got off my ass and started riding my mountain bike this week. 1 mile on Tuesday, 2 miles on Thursday, 3.25 today. I used to ride all the time and was actually in really good shape. Somewhere along the line, my shape turned to “pear” and walking up a couple of flights of stairs became challenging exercise.

Pathetic. Hopefully not for much longer. I’m purposefully easing back into riding to avoid damaging myself severely due to my rather fragile current condition.

It isn’t without its geek-centered aspects. I’m using a Magellan GPS to track distance and speed and to ensure that I get home in time to do whatever is next in the morning schedule. I would like to be able to download the tracks to my Mac. I figure I’ll solve that problem during these trivial daily rides so that, when I’m actually doing interesting rides again, I don’t have to face the stupidity then.

Dos Compadres Taqueria is located at 100 W. Hamilton (near Winchester) in Campbell, CA. Excellent Mexican food, very cheap.

I had the Camarones Rellenos, thinking “Shrimp. Healthy seafood. Not too heavy.”

Ha! How wrong could I be?

Turns out that Dos Compadres’ Camarones Rellenos is shrimp wrapped in bacon and deep-fried until golden brown. Quite spectacular, actually. I would recommend it to anyone.

I found this recipe, with cheese (sounds good, but the cheese isn’t necessary, though it does add a slight bit of extra heart-stopping power). Honestly, you don’t need a recipe. Just wrap bacon around shrimp and fry with an extremely light batter.

Unix has long had an EDITOR environment variable whose value is used by many programs whenever the user is expected to edit a hunk of text. For example, CVS and Subversion use the program specified by EDITOR to edit commit messages.

bzero, my new weblog rendering/publishing tool, uses EDITOR to edit weblog posts.

Given our modern desktop environments, it seems kind of silly to limit EDITOR to just command-line tools. It would be nice to be able to use TextEdit, Xcode, or BBEdit to edit files normally edited via EDITOR. Emacs and vi are perfectly capable editors, but they simply don’t have the level of integration with the system that a GUI-level editor has (yes, there are .app implementations of Emacs, but they are still not exactly “user friendly” in the classic Mac OS application sense).

While open can be used to easily open any random file in any random GUI application, it doesn’t work in the context of EDITOR. When something like bzero, cvs, or subversion invokes the EDITOR, it expects the command to not exit until the editing is completed. open exits immediately after it has successfully passed the file off to the GUI app, long before editing has even started.

BBEdit provides a command-line tool that solves this. With the --wait option, the tool will not exit until the associated editing window is closed within the BBEdit application. Exactly what is needed.

If you happen to have BBEdit installed, you can install the command-line tool by going to the “Tools” subsection of BBEdit’s Preferences and clicking the button to install the “bbedit” tool.

It will install both the bbedit tool and a very nice man page.

Once installed, you can set EDITOR to bbedit --wait --resume and most things will work correctly. However, there are a handful of apps that don’t like it when EDITOR is set to something other than just an executable. As such, you’ll probably want to point EDITOR at a little shell script that invokes bbedit with the appropriate arguments. I created a script called bbeditwait with the following contents and shoved it in the bin directory within my home account:

#!/bin/sh
# Pass the arguments through quoted so filenames containing spaces survive.
bbedit --resume --wait "$@"

Then set EDITOR to ~/bin/bbeditwait. Unlike the suggested script found on the BBEdit site, this one will allow you to edit multiple files. There is a bug, though, in that it will literally edit each file one after the other, opening the next only after the previous has been closed. It should open all of the files at once and have the bbedit tool exit when all of the requested files have been closed.

Update:

Ben Hines mentioned that this would make a good Mac OS X Hints submission. There is already such an entry on Mac OS X Hints, but it is rather generic, and the shell script suggested in the comments doesn’t quite work the way I wanted in that it only accepts a single file.

This isn’t new. Under OpenStep, I patched TextEdit such that an accompanying command-line tool could edit files in the same fashion as bbedit --wait. It also allowed one to edit files remotely. So, if I were logged into a Solaris or Linux dev box from my OS X Server / OpenStep machine, EDITOR-based sessions would pop up on my local machine as a regular TextEdit document.

I probably still have the code around somewhere. I should probably dig it up and post it.

So, BoingBoing and others are already starting to criticize TiVo’s new TiVo to Go service because it uses a proprietary DRM technology to lock content to a particular computer/TiVo combination (or something like that), thereby eliminating the ability for the user to edit or archive the content. It isn’t really clear that archival is disabled, but flexible archival is certainly limited.

In any case, Cory says “Not delivering the products your customers demand is not good business. It never has been.” as a way of criticizing what TiVo is delivering as not being what the customer wants.

Cory seems to have missed the point. As a long-time TiVo user, I can tell you exactly what I and most other TiVo users want.

I want to know that my TiVo will have the accurate scheduling information required to resolve scheduling conflicts such that it will record the content I desire without me having to constantly double-check the TiVo’s schedule of recording events.

Doing that requires accurate scheduling information delivered in a timely fashion (i.e., ahead of lineup changes) by the various content distribution agents. Given the vast and ever-increasing number of cable providers (San Jose has at least 6, maybe 8, different cable lineups depending on your location) and the control the local cable company can now exert over the broadcast schedule, having access to accurate information in a timely fashion is paramount to the TiVo unit’s ability to record the right shows at the right time.

Being able to play a show on my computer is a “nice to have”. Being able to edit and share that content with others would be even nicer.

But if editing and sharing the content threatens the ability for the TiVo to fulfill its primary function, then what would be the point?

Cory asks “Where does this bizarre idea — that the dinosaur industry that’s being displaced gets to dictate terms to the mammals who are succeeding it — come from?”

It seems abundantly clear that this “bizarre idea” comes directly from the fact that the newcomers depend upon the dinosaurs not deciding to crush them. Without cooperation from the dinosaurs, the newcomers will not survive.

The dinosaurs are not only TiVo’s eventual competition, but also TiVo’s only means of achieving success.

Historically, the original ISPs were in a very similar situation and were pretty much completely crushed when the telcos decided there was money to be made. The telcos could undercut the ISPs by simply eliminating the cost of one of the two local loops (you -> ISP, ISP -> [telco] -> Internet vs. you -> [telco] -> Internet). TiVo is in a similar situation in that the cable companies could decide to roll out their own PVR with a claim that their PVR’s scheduled recordings are much more accurate/reliable because they are directly integrated with the scheduling stupidity at the head end.

So, TiVo walks a fine line and has been doing so very well. Witness ReplayTV (who? Gone.). They threw the 30-second skip and content distribution in the face of the broadcasters, and the end result was a complete lack of strategic partnerships and the eventual death [twice] of the platform.

Radio has effectively handed me a blank slate. It is as if I’m creating a weblog for the first time, only this time I have 2+ years (**double take**) of weblogging experience at the start.

I could batch move all the old “content” into the new client environment. To Radio’s credit, it makes doing so very easy.

The alternative is to cull through all the gunk posted over the last few years and use this as an opportunity to update a bunch of content.

You might notice a distinct lack of content here.

That is because my Radio UserLand installation exploded one time too many. I’m going through the pain of moving to something new.

At the moment, I’m using bzero. However, there is a bug that prevents it from working when bzero itself is installed on a different volume than your home account. I had to put an ugly little hacque in place to redirect the location of bzero’s per-user configuration.

In looking for a replacement, one key requirement is that it remain PyCS compatible; that is, it must implement the xmlStorageSystem protocol.

I’m quite comfortable with continued hosting at PyCS. Phillip Pearson has been extremely generous and helpful; I’ll take this opportunity to publicly thank him for his efforts!

An old friend from our mutual NeXT days contacted me. He now works at Sleepycat and gave me a pointer to Syncato.

One of the reasons why I write a weblog is so that there is a Google-indexed record of some thoughts/notes/tricks that I have had/taken/discovered over the years. It is more efficient and effective to find information via Google than it is to grep through a bunch of text files on my own system.

Through XSL and XQuery, Syncato offers a compelling set of features that might effectively address the need to find information and certainly offers a wide variety of interesting ways to cross cut the data.

With all that said, it is hard to beat the simplicity of bzero, and bzero’s use of Python source as templates is particularly interesting.

In the comments, PyDS has been mentioned as a possible candidate. PyDS is a really cool collection of technologies, but it does not fulfill the desire to have a simple weblogging client that is compatible with Python/Radio Community Servers.

PyDS and Radio share a trait that is at the heart of a pet peeve of mine: why do I need a desktop web server to publish a weblog? Both tools are extremely complex, though in different ways. Radio is a closed, proprietary pile of antiquated complexity that no longer behaves well on the modern Macintosh desktop. PyDS is an open, well-behaved desktop server infrastructure with a boatload of dependencies on other open source projects. If it were a choice between the two, I would be posting from PyDS right now.

But both platforms are eliminated by the complexity factor. In their own ways and for different reasons, each is fragile, hard to manage, and complete overkill for solving the relatively basic problem of publishing a weblog.

The Syncato solution mentioned above is also relatively complex and hard to install (compared to something like bzero or your basic Blogger client [NetNewsWire again]), but it offers a lot of direct benefit for that complexity in the form of a powerful, flexible database containing your content. In other words, the complexity is focused on solving one problem rather than solving a bunch of different problems in a loosely connected desktop server. I’m just not sure I care — I’m not sure that Syncato offers enough value beyond simply knowing that Google will index whatever I publish well enough for my needs. Maybe if I wanted a personal log of private notes. For that, I use Note Taker and SBook.

Big Nerd Ranch was kind enough to send me a copy of their new book Core Mac OS X and Unix Programming in gratitude for editing the original Cocoa Programming for Mac OS X book.

The book is effectively the course materials for the Big Nerd Ranch course of the same name.

Co-authored by Mark Dalrymple and Aaron Hillegass, the book covers many advanced topics in the same clear/concise manner as Aaron’s original Cocoa programming book.

The book focuses on relatively advanced topics such as memory management, the run loop, multiple processes, Rendezvous, and more. Rather than dwelling on high-level details, the book investigates each topic with the aim of documenting the technical, often non-obvious, details through examples, diagrams, and discussion.

Two bits that immediately grabbed my attention, and are examples of the approach found throughout the book, were the discussion of malloc() behavior and the discussion of the run loop.

In the case of malloc(), the book discusses precisely how malloc() does not necessarily allocate the number of bytes you think it does — it often reserves more, based on certain internal optimizations. The end result is that quite a number of memory-size-reduction optimizations are a complete waste of time! Instead, you might be able to consume a few extra bytes by not compressing the data structures and gain raw performance in terms of the number of CPU cycles needed to access the data.
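
You can see this for yourself with a minimal sketch (my example, not the book’s; it assumes Mac OS X, where malloc_size() from <malloc/malloc.h> reports how large an allocated block really is):

#include <stdio.h>
#include <stdlib.h>
#include <malloc/malloc.h>   /* malloc_size() */

int main(void)
{
    size_t request;

    /* Ask for awkward sizes and report how much the allocator actually reserved. */
    for (request = 1; request <= 64; request += 7) {
        void *p = malloc(request);
        printf("requested %2lu bytes, malloc reserved %2lu\n",
               (unsigned long)request, (unsigned long)malloc_size(p));
        free(p);
    }
    return 0;
}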

The discussion of the run loop was also of great interest. The book clearly diagrams the run loop — both CFRunLoop and NSRunLoop — and the exact points during each pass at which the various events and timers are handled.
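
For example (a minimal sketch of my own, not taken from the book), a CFRunLoopTimer’s callback runs only when the run loop reaches the timer-servicing point of a pass. Compile with cc -framework CoreFoundation:

#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

/* Invoked by the run loop when it reaches the point where timers are serviced. */
static void fired(CFRunLoopTimerRef timer, void *info)
{
    printf("timer fired from within the run loop\n");
    CFRunLoopStop(CFRunLoopGetCurrent());
}

int main(void)
{
    CFRunLoopTimerRef timer = CFRunLoopTimerCreate(
        kCFAllocatorDefault,
        CFAbsoluteTimeGetCurrent() + 1.0,   /* fire one second from now */
        0, 0, 0,                            /* one-shot; no flags; default order */
        fired, NULL);

    CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);
    CFRunLoopRun();     /* blocks here; the callback is delivered during the timer phase */
    CFRelease(timer);
    return 0;
}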

Excellent, excellent stuff.

The $100 price tag is a bit higher than that of most developer books, but the content is of an advanced and highly refined nature. $100 is a heck of a lot cheaper than $3,500 — the entrance fee to the class for which the book was written.

Big Nerd Ranch has also made a set of resources and forums available online.