tag:blogger.com,1999:blog-88036528956877471142024-02-06T19:07:31.066-08:00cat /dev/randomBrian GoetzBrianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.comBlogger56125tag:blogger.com,1999:blog-8803652895687747114.post-15059473783671396732014-06-16T11:17:00.002-07:002014-06-16T11:17:28.740-07:00Solar -- one year inHere's an update on our Solar installation, after a full year of data. <br />
<br />
The system was initially sized with the intention of completely canceling out our electric bill. Of course, a number of factors can cause actual generation in any given year to vary from projected generation; variation in weather is probably the largest. <br />
<br />
Overall, we came pretty close to hitting our target; our net electric bill for the year was $100, only about 5% of the cost of our actual electrical usage. I'll call that a success.<br />
<br />
I knew that we'd get a lot more power generated in the summer than in the winter -- longer days, better sun angle (power generation is proportional to the cosine of the angle of incidence), clearer skies, and no snow on the roof. But I was surprised by just how big the swing was -- almost a 10x difference between generation in June and generation in December. <br />
<br />
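That cosine relationship is easy to see numerically. A quick sketch (the angles here are illustrative, not computed for any particular latitude or season):

```ruby
# Relative panel output as a function of the sun's angle of incidence
# (the angle between the sun's rays and the panel's normal; 0 = face-on).
def relative_output(theta_degrees)
  Math.cos(theta_degrees * Math::PI / 180.0)
end

[0, 30, 60, 75].each do |deg|
  printf("%2d deg -> %3.0f%% of face-on output\n", deg, 100 * relative_output(deg))
end
```

A panel that catches the summer sun nearly head-on but the winter sun at a steep angle loses half or more of its output from geometry alone, before weather and snow cover are counted.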
Below is a graph of cumulative power generation by month over the past twelve months; the Y axis is kWh. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgw55fVPrq5iTDtc9Th_CC_39b3BnLcyEZnQxrN3qJiZM1KgnbMUOcia29bpMxd70LXARBUpv53M0ACw1SKWDk1la8mx1DxXQYvmzL7Yqx59UF68-qDXX0OUkNsUAJFQNz2VM0lsI8Gd4/s1600/2014-06-16_14-15-55.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgw55fVPrq5iTDtc9Th_CC_39b3BnLcyEZnQxrN3qJiZM1KgnbMUOcia29bpMxd70LXARBUpv53M0ACw1SKWDk1la8mx1DxXQYvmzL7Yqx59UF68-qDXX0OUkNsUAJFQNz2VM0lsI8Gd4/s1600/2014-06-16_14-15-55.png" height="569" width="640" /></a></div>
<br />Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com0tag:blogger.com,1999:blog-8803652895687747114.post-54479049021972652612013-05-11T19:35:00.005-07:002013-05-11T19:35:52.583-07:00Solar -- one month inAbout one month ago we installed a solar array at the house, with twenty-four 325W panels, for a peak generation capacity of 7800W. So far, we're happy with it. A sunny day in May seems to generate about 50-55 kWh. <br />
<br />
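For scale, the arithmetic above can be checked directly (the 52 kWh figure is just a point inside the quoted 50-55 kWh range, and the "capacity factor" framing is mine, not the installer's):

```ruby
panels          = 24
watts_per_panel = 325
peak_kw         = panels * watts_per_panel / 1000.0  # 7.8 kW peak capacity

# A ~52 kWh sunny day corresponds to a daily "capacity factor" of roughly:
sunny_day_kwh   = 52.0
capacity_factor = sunny_day_kwh / (peak_kw * 24)     # fraction of a 24h full-power day
```

That works out to a bit under 30% of the theoretical full-power day, which is about what you'd expect for a long, sunny spring day.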
We chose to lease rather than buy. A number of companies now offer leases on solar power systems, usually for a 20 year period, where they own and maintain the system, act as general contractor for procurement and installation, and handle the tax/permit/utility paperwork. Leasing was cost-competitive with buying -- and far less hassle and paperwork. The leasing company offered a choice of a monthly fee or a one-time payment. The one-time payment is a far better deal; the monthly payment basically represents financing at 7%. The install was completely hassle-free. The system uploads data to a monitoring site, so you can get graphs and reports of your production. <br />
<br />
In Vermont, there are three sources of subsidy for solar: a 30% federal tax credit, a state tax credit based on generating capacity (which came out to about another 10%), and a premium that utilities pay for your generated power. (Some utilities will only give you a credit; ours (Green Mountain Power) will cut a check for any credit balance at year end.) With a lease, the leasing company gets the tax rebates (which reduces the cost of the system) and handles all the tax paperwork and associated risk, while the homeowner keeps any payments for the power generated. <br />
<br />
I was apprehensive about whether to believe the projected generation capacity; with a month of data, I am gaining some confidence that the projections were reasonable. Based on these projections (and assuming that power rates stay the same), the system should offer a ten-year payback and an 8% return on investment. In hindsight, I would have gone with a slightly bigger system (there's still plenty of room on the roof); the standard approach seems to be to size the system to net out your power bill to zero, but this seems more of a psychological than a financial target. <br />
<br />
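The payback arithmetic is simple enough to sketch. I haven't stated the one-time lease payment or our utility rate here, so the cost and rate below are invented purely to show how a ten-year payback falls out:

```ruby
# All inputs here are hypothetical except the 26 kWh/day projection.
daily_kwh    = 26            # projected average daily generation
annual_kwh   = daily_kwh * 365
rate_per_kwh = 0.21          # assumed effective $/kWh (tariff plus premium)
system_cost  = 20_000.0      # hypothetical one-time lease payment

annual_value         = annual_kwh * rate_per_kwh
simple_payback_years = system_cost / annual_value
```

With those made-up inputs the simple payback comes out right around ten years; the real return also depends on rate changes and utility policy over the lease term.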
The key risk items are:<br />
<br />
<ul>
<li>Generation. The system may generate less than projected, though the first month looks promising. (To meet their targets, I need to average 26 kWh/day of generation over the year. In May, we averaged 38 kWh/day; I would expect to generate even more in July/Aug and much less in Jan/Feb, but it is believable that we will hit this average.) Even if the projections are accurate, we are of course still dependent on weather.</li>
<li>Changes in utility policy. Green Mountain Power offers an effective 7c/kWh subsidy on top of the regular tariff for any power we generate. However, the company could change this policy, and probably will sometime in the next twenty years. </li>
<li>Change in power rates. If power rates go up, the return is better; if power rates go down, the return is worse. I have to assume over 20 years electric rates will go up. </li>
</ul>
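A quick sanity check on that 26 kWh/day target: if (purely for illustration) half the year generates at the observed May rate, the other half only needs to average 14 kWh/day:

```ruby
target_avg = 26.0  # kWh/day needed on average to meet the projection
summer_avg = 38.0  # observed May average

# If six months run at the May rate, what must the other six average?
winter_avg_needed = 2 * target_avg - summer_avg
```

That leaves a fair amount of headroom for the weak winter months, though the even split is of course an invented simplification.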
<div>
Below is a chart of daily production in the 30 days since install. </div>
<div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKZ1-szyG_M2mxn1wfUv4gCKqll8G5wFP2V_RtFIPDVr7apeb0jC5euMZ2eGCwMYbzZmxwDxZHqEPl4Nm7jSdh1QlG27PLuZRkZPBZ51EQiL-xyvcW14lK6G47gb37pVbdLys08wpxapM/s1600/5-11-2013+10-24-22+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="170" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKZ1-szyG_M2mxn1wfUv4gCKqll8G5wFP2V_RtFIPDVr7apeb0jC5euMZ2eGCwMYbzZmxwDxZHqEPl4Nm7jSdh1QlG27PLuZRkZPBZ51EQiL-xyvcW14lK6G47gb37pVbdLys08wpxapM/s640/5-11-2013+10-24-22+PM.png" width="640" /></a></div>
<div>
<br /></div>
</div>
</div>
<div>
Overall, while there's some risk, it seems that the system cost (with the current level of subsidy) has come down to the point where it offers a positive financial payback for homeowners in addition to the ecological benefits. Plus, it's fun to watch the meter spin backwards! </div>
<div>
<br /></div>
Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com1tag:blogger.com,1999:blog-8803652895687747114.post-17856444422333970842010-06-07T15:32:00.001-07:002013-02-01T06:20:51.185-08:00Exception transparency in JavaRecently posted on my Oracle blog: <a href="https://blogs.oracle.com/briangoetz/entry/exception_transparency_in_java">http://blogs.oracle.com/briangoetz/entry/exception_transparency_in_java</a>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com1tag:blogger.com,1999:blog-8803652895687747114.post-6167935519562633452010-05-20T14:01:00.000-07:002010-05-20T14:01:11.739-07:00Memtest86+ rules!About two months ago I upgraded my system from Windows XP to Windows 7 64-bit, and at the same time from 4G to 8G RAM. As always happens, I was amazed how much faster a new Windows installation was than the old one on the same hardware -- it is insidious how "Windows Decay" chips away at performance. <br />
<br />
About a week ago, the system started behaving badly -- IE crashing, Thunderbird crashing, and starting yesterday, the whole thing blue-screening. After wasting a lot of time trying to figure out "what software was updated recently", I started to suspect memory errors. So I ran the Windows memory test program that shows up on the boot screen -- nothing. <br />
<br />
After more dorking around, I downloaded MemTest86+ (www.memtest.org), burned it to a USB drive, and ran it. It immediately found thousands of memory errors; by trying various combinations and moving modules from slot to slot, I was able to identify the bad modules. I had bought Crucial's top-of-the-line modules (Ballistix Tracer LED) from Newegg; the Crucial folks immediately shipped out a replacement. <br />
<br />
Given how many errors MemTest found, it's amazing that the Windows test found nothing. <br />
<br />
Thumbs up for MemTest86+ and Crucial customer service. Thumbs down for Windows Memory Test. Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com5tag:blogger.com,1999:blog-8803652895687747114.post-91703060300034345642010-05-18T09:52:00.000-07:002010-05-18T09:52:59.734-07:00Registration is open for the 2010 JVM Language SummitWe've just opened registration for the 3rd annual JVM Language Summit, to be held at Oracle's facility in Santa Clara CA on July 26-28. See <a href="http://www.jvmlangsummit.com/">http://www.jvmlangsummit.com/</a> for details. Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com0tag:blogger.com,1999:blog-8803652895687747114.post-17536400029710234822010-01-31T06:59:00.000-08:002010-05-15T19:18:55.569-07:00Book review: Fermat's Enigma<iframe src="http://rcm.amazon.com/e/cm?lt1=_blank&bc1=000000&IS2=1&bg1=FFFFFF&fc1=000000&lc1=0000FF&t=none0b69&o=1&p=8&l=as1&m=amazon&f=ifr&md=10FE9736YVPPT7A0FBG2&asins=0385493622" style="width:120px;height:240px;" scrolling="no" marginwidth="0" marginheight="0" frameborder="0"></iframe><br/><p>This is a nice little book about the history of mathematics and the 350-year quest for the proof to Fermat's Last Theorem. It was written by the fellow who wrote the BBC / Nova TV special on Andrew Wiles, but includes a lot more information than a one-hour show could. It does a nice job at hitting many of the high points of mathematical development from Pythagoras to modern day, including the "discovery" of zero, then negative numbers, then imaginary numbers, techniques for grappling with infinity, Turing-computability, and Godel's incompleteness theorem. It doesn't attack any of these in great depth, but it does provide a nice historical perspective while remaining about as accurate as a lay book can do. 
It also does a nice job of illustrating the near-hubris required for Wiles to lock himself in a closet for eight years in order to solve a problem that had eluded mathematicians for centuries. Mathematicians will enjoy the panorama; non-mathematicians will likely find the introduction to some of these obscure concepts accessible and enjoyable. Also by this author: <a href="http://www.amazon.com/gp/product/0385495323?ie=UTF8&tag=none0b69&linkCode=as2&camp=1789&creative=390957&creativeASIN=0385495323">The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography</a>.</p><p>(Recommended to me by: Stuart Marks.)</p>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com3tag:blogger.com,1999:blog-8803652895687747114.post-37063209974670301732010-01-31T06:36:00.000-08:002010-05-15T19:18:55.571-07:00Our government, protecting us<p>We've recently gone on an "energy efficiency" rampage at the house, replacing bulbs with CFLs, identifying devices that are unnecessarily left on all the time, wrestling with Windows to stay asleep during periods of inactivity, etc. We also recently installed a "continuous" or "on demand" hot water heater, replacing the 50G direct-vent tank heater we had (it was getting to the end of its lifetime and it was easier to replace it preemptively.) </p><p>Unfortunately, the state requires all newly installed water heaters to have a thermostatic mixing valve that limits the water temperature to 120 degrees. (For tank systems, it is recommended to keep the tank water at 140, to prevent the bacteria that causes Legionnaires' disease, but 140 is hot enough to scald. But continuous systems have a control system for the output temperature, so they can be safely kept at whatever temperature you program in.) 
And it's probably not even working right, since the output temperature is even less than 120. The valve adds cost to the system and to the installation (probably a dozen additional welds in addition to the valve), and while we now have an infinite supply of hot water, generated more efficiently, it's not as hot as we like it. </p><p>Reputable plumbers are not able to remove or bypass the valve, which means we need to either find a disreputable plumber or I need to do it myself (read: find an incompetent plumber.) </p><p>Note to lawmakers: in my many years of successful shower use, I've learned a secret trick to avoid getting scalded: put your hand under the water first -- if it's too hot, turn down the water temperature before getting in!</p><p>Thanks, elected officials, for making my house systems both more expensive and less useful. </p>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com3tag:blogger.com,1999:blog-8803652895687747114.post-63505203417234617532010-01-23T14:23:00.000-08:002010-05-15T19:22:32.355-07:00e-mail packrat<p>I've long tried to keep all the e-mail I've ever sent or received; I've got an archive going back to 1985 or so, when I first realized that keeping e-mail might be a good idea. Trouble is, keeping such an archive in one place requires a fair amount of maintenance, because formats and protocols change. Is it worth the effort? </p><ul><li>Until 1987, I primarily used a VMS system. </li><li>In 1987, I switched to a Unix machine at MIT. I was able to import my old VMS mail into whatever the mailbox format of the day was (mbox, probably) by a script I found somewhere. </li><li>In 1992, I switched to using POP through the client program Eudora. Eudora stored the mail locally, in a folder format that was something like 'mbox', but not exactly. (For example, attachments were not stored inline, but instead in external files.) I managed to import my old Unix mbox files into folders. 
</li><li>In 2004, I switched from POP to IMAP. I went through an extensive process to convert my existing mail base into real mbox files that my IMAP server could read. I spent several days writing scripts to convert the Eudora pseudo-mbox files to something imapd could handle.</li><li>In 2006, I left Quiotix, and switched my primary mail over to Tuffmail. I took my mail archive (in mbox file format) and put it up on my server machine, where I serve it up with imapd. So I now have my mail split between two servers, but Thunderbird can deal with multiple servers just fine. I tried to move the archived mail to Tuffmail as well with several different tools (imapsync, offlineimap, Thunderbird bulk-copy) but I could never get a clean copy -- I suspect that the combination of crappy old multiply-converted mbox files and the old UW-IMAPD server is to blame. </li></ul><p>Right now it's still fragmented across a number of formats and servers. Yuck.</p>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com2tag:blogger.com,1999:blog-8803652895687747114.post-57584720557973442712010-01-09T04:17:00.000-08:002010-05-15T19:18:55.575-07:00zTunes released<p>As promised in yesterday's entry, I'm releasing my digital media management software (ztunes) to the world. It's hosted at github: <a href="http://github.com/briangoetz/ztunes">http://github.com/briangoetz/ztunes</a>. It is written in Ruby and based on "rake", the Ruby equivalent of "make". It currently has a long way to go but already does a lot. </p><p>You can download the Ruby gem here: <a href="http://github.com/briangoetz/ztunes/downloads">http://github.com/briangoetz/ztunes/downloads</a>. (It is not currently in any sort of gem repository.) It defines its gem dependencies, but you'll also need the Unix tools ffmpeg, flac, and lame. 
It will run on Linux and Mac but currently has some trouble on Windows since it is dependent on symbolic links for some of its functionality, which Windows doesn't support.</p><p>My motivation for writing this was that iTunes is really inadequate for managing a media library unless (a) you only want to play on iPod (or other Apple) devices and (b) you are willing to let iTunes be in control of ripping and encoding. This didn't work for us for two reasons: we have Squeezeboxes on all the stereos, and I want to rip my CDs to a non-proprietary, lossless format (that means flac, which iTunes doesn't support.) We also have music that has been acquired in various other forms (MP3s from Amazon, AAC from iTunes, WMA from Rhapsody) and want to be able to play all the music on all the devices, without transcoding it all down to a least-common-denominator. (In other words, if Squeezebox supports WMA but iPod doesn't, let Squeezebox play off the original WMA but let iPod play the transcoded version.) And this should be transparent to the rest of the family.</p><p>There are several basic tasks in managing the media library that zTunes automates:</p><ul><li>Content ingestion. I've got a "drop" folder, into which I want to drop the originals of my media, in whatever form, and have them be analyzed, metadata extracted, and filed into a unified library based on its metadata. My metaphor here is the gas tank of an M1 tank: you can pour anything combustible (gasoline, jet fuel, diesel, used cooking oil) into the tank and it figures out how to burn it. Currently it maps a media file to a filename by using the author/album/title tags for audio or the title tag for video; audio files are named like "The Who/Who Are You/Squeezebox.flac". </li><li>Transcoding. Not all devices play all file formats. So ingested content also needs to be transcoded into alternate formats, which are maintained as parallel directory trees. 
The transcoded trees are transient; they are merely shadows of the "authoritative" tree. Some files may need to be transcoded to multiple formats; for example, video files ripped from DVD or transferred from TiVo might be transcoded to 480 x 320 video for iPhone but 320 x 240 for the older video iPods. </li><li>Syncing. I use the Windows program "Tag&Rename" to edit the metadata tags on my media files, to normalize genres, naming details like "The Cars" vs "Cars, The", "Vol 1" vs "Disk Two", etc. When I edit the metadata on an "original" file, I'd like the file to be renamed accordingly, and metadata changes to be reflected in the transcoded copies. When I delete an original, I want the transcoded copy to go away. Etc.</li><li>Device management. I would like to have a single directory for each device type that I can point device-specific library management software (iTunes, Squeezecenter, Creative Explorer) at, and it will see the right view of the media library for that device (will only see files it can play; will see them in the "best" format available for that device.) </li></ul><p>One thing it does not do yet is manage the integration of your external media library into iTunes (iTunes is particularly bad at dealing with files you didn't acquire through iTunes.) </p><p>See more in the README file here: <a href="http://github.com/briangoetz/ztunes/blob/master/README">http://github.com/briangoetz/ztunes/blob/master/README</a></p><p>I'm currently using this to manage a library of ~8,000 media files in half a dozen formats. I'd love to get some more users -- drop me a note if you're interested! </p>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com0tag:blogger.com,1999:blog-8803652895687747114.post-59538136179014003452010-01-08T14:46:00.000-08:002010-05-15T19:18:55.577-07:00A busman's holiday<p>Since I've been working way too hard, of course I decided to spend my XMas break...programming. 
(<a href="http://www.answers.com/topic/busman-s-holiday">http://www.answers.com/topic/busman-s-holiday</a>). I had two goals: rewrite my digital-media handling software, and learn Ruby. I'm pretty happy with what I accomplished on both counts.</p><p>The motivation to rewrite my digital-media scripts came from having too many conversations like the one below with Stuart Marks:</p><p>SM: Hey, you wrote a bunch of scripts to manage audio and video files, are you willing to share them?<br />BG: Well, in theory, yes. But I'm kind of embarrassed to show them to anyone...<br />SM: Let me guess. Perl?<br />BG: Yep.<br />SM: I have a Perl story...<br />BG: Don't bother -- all Perl stories end the same way.</p><p>I'll post the full details soon -- including links to the software on github -- but for now I'll just outline the problem I was trying to solve:</p><ul><li>Ingest digital media files in any format (MP3, AAC, WMA, WAV, FLAC, M4A, M4V, WMV, MP4, etc)</li><li>File them into a library based on their metadata</li><li>Additionally transcode them down to one or more "compressed" formats (MP3 for audio, iPhone-sized MP4 for video) for memory-constrained devices, without letting go of the original</li><li>Organize them so that each device (iPod, Squeezebox, non-iPod MP3 player) can play all the media, in the best format that the device can recognize natively (Squeezebox supports MP3, WMA, and FLAC; iPod supports MP3 and AAC; Zen supports MP3 and WMA) or a transcoded form if it can't. For example, for a given track whose source form is WMA, Squeezebox and Zen should see the WMA but iPod should see the MP3; for a track in FLAC, Squeezebox should see the FLAC but iPod/Zen should see the transcoded MP3. 
</li></ul><p> </p><p> </p>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com0tag:blogger.com,1999:blog-8803652895687747114.post-70183888003614669112009-10-18T07:13:00.000-07:002010-05-15T19:18:55.580-07:00Is this a joke?We decided to upgrade the hard drive on our Tivo Series3, since the stock 250G drive only holds about 30 hours of HD video. I studied the various Tivo upgrade boards and selected a hard drive that had been recommended by some, the WD 1.5TB "EADS" Green drive. The upgrade process is simple: crack the tivo, extract the drive, move the data from the old drive to the new, "expand" the new drive (updating the partition table so it uses the extra space), and replace the drive.<br/><br/>Attempt 1: 1.5TB drive, using trusty Unix tools -- put the drives into a Unix box, copy data with dd, then do the expansion with 'mfstools'. This is the approach I've used several times in the past, with good results. Put the new drive in, turn it on, and it gets stuck forever in the "Welcome, powering up" screen. Back to Google. <br/><br/> Turns out that the S3 can't see a partition bigger than 1TB, and mfstools expands the partition to the whole rest of the drive, yielding a too-big partition. Turns out mfstools doesn't support limiting the size of the partition, but the Windows version (winmfs) does, so I'll use that instead. (It's good to have lots of spare computers around when attempting any sort of upgrade.)<br/><br/>Attempt 2: 1.5TB drive, using winmfs. Put the drives in the Windows box, run winmfs to copy the data, and let winmfs expand the partition. It asks me "should I limit the partition to 1TB", I say yes, good. Put the drives back -- same problem. More Googling.<br/><br/>So I discover that "some versions of the drive I was using (WD15EADS) are 'not compatible' with Tivo Series3." It's been years since I've heard about incompatible (system, disk) pairs, and this is a standard SATA drive, but OK, I guess I bought the wrong drive. RMA time. 
Sorry, NewEgg. The Tivo Upgrade FAQ (<a href="http://www.tivocommunity.com/tivo-vb/showthread.php?t=370784">http://www.tivocommunity.com/tivo-vb/showthread.php?t=370784</a>) is telling me I should favor the WD EVVS drives instead, so I buy a 1TB drive (WD10EVVS) from Amazon. <br/><br/>Attempt 3: 1TB drive, winmfs. I repeat the process, copying the 250G drive to the new 1TB drive, and put the drive back in the Tivo. (At this point I've learned to try it before I fasten all the screws.) Same deal -- stuck on the "Welcome, Powering Up" screen. More Googling.<br/><br/>I found this update, which was added after I'd bought my drive:<br/><br/>The WD10EVVS was removed from the list on October 10, because there is a new<br/>batch of that drive, manufactured on September 20, that is not compatible<br/>with the TiVo. These incompatible drives are labeled as follows:<br/><br/>MDL: WD10EVVS - 63M5B0<br/>Product of Thailand<br/>DATE: 20 SEP 2009<br/>DCM: [b]HAxxxxxxxx<br/>R/N: 701640<br/>LBA: 1953525168<br/><br/>I looked at my drive, and sure enough, I had one. <br/><br/>Is this an elaborate joke? <br/><br/>Next up: RMA redux, ordered a WD 10EVDS drive. Stay tuned.<br/><br/> <br/><br/>Update: installed the WD10EVDS, worked fine. Fourth time's the charm!Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com2tag:blogger.com,1999:blog-8803652895687747114.post-71716832460022704312009-10-15T22:28:00.000-07:002010-05-15T19:18:55.582-07:00You've been scammed!I got an odd e-mail from PayPal the other day, telling me I'd paid EU250 to something called "Skype Business Panel." My first thought was that it was a phish, but careful examination suggested it was real. I logged on to my PayPal account and indeed I'd been charged EU250. What the hell is "Skype Business Panel", anyway? Turns out it is a skype feature where businesses can allocate credit to the skype accounts of their employees and thereby manage their telephone spending. 
<br/><br/>I have spent about $10 with skype per year, recharging my skype account from PayPal when it ran low. Somehow (don't remember) I had authorized skype to charge my PayPal account when my balance got low. And this was the vector through which I was scammed. Someone must have gotten a hold of my skype password (don't know how), logged on, and billed EU250 to my PP account (which didn't require a PayPal webflow), and then allocated it to some bogus accounts. <br/><br/>First stop: dispute the charge with PayPal. They were completely unhelpful, pointing me to the authorization and told me to work it out with skype. Fortunately skype was more helpful, and they reversed the charge immediately. <br/><br/>I then logged on to my Skype Business Control Panel (now that I know such a thing exists), and found several bogus accounts linked to mine, which I deleted. After all was said and done, including the refund, I still somehow had a EU100 balance on my BCP, meaning somehow the scammers gave me EU100. <br/><br/>To see if you have any such preapprovals on file: login into your paypal account, click "Profile", and click "Preapproved Payments." You can delete them from there.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com2tag:blogger.com,1999:blog-8803652895687747114.post-51807013641752551552009-08-29T08:28:00.000-07:002010-05-15T19:18:55.584-07:00Laptop upgrade annoyancesWe've got an old Dell C400 laptop. Its seven or eight years old, but its still going strong, and it just fine for an around-the-house laptop (similar in performance to a modern netbook, but with bigger screen/keyboard, and still nice and light). The limiting performance factor right now seems to be the hard drive; many Windows operations (booting, shutdown, sleep, wake) are disk-seek-bound, so I bought an IDE SSD to replace the existing IDE drive. Hopefully that will also improve battery life and thermal characteristics. 
<br/><br/>What I'd like is a simple way to move all the data from the existing drive to the new drive, and then just toss the old drive. But this isn't as simple as it might appear. Laptop IDE cables generally only support one drive, so I can't use (say) PartitionMagic to do a partition copy the way I would on a desktop system.<br/><br/>A lot of people have suggested various tricks, like:<br/><ul><br/> <li>Get an IDE-USB adapter, put the old disk on that, put the new disk in the machine, boot from a Linux CD, and use dd to copy the data;</li><br/> <li>Get a pair of 40 pin to 44 pin IDE adapters, put them in a desktop system, and copy using PartitionMagic (Windows) or dd (Linux);</li><br/> <li>Find a dual-drive 44 pin IDE cable, plug both drives in, and hope that the OS / BIOS recognizes both disks;</li><br/> <li>Just reinstall Windows and whatever apps I have on the new drive (including chasing down all the device drivers, such as the touch pad, speakers, etc)</li><br/></ul><br/>Why is this so difficult? A hard-drive swap should be a simple, common upgrade operation that shouldn't require using tools from another operating system, transplanting the drives into another system, or rebuilding the world from scratch. <br/><br/>On a similar note, I just bought a Samsung NC10 netbook, and was going to wipe the disk and reinstall OSes. I have all the software I want ripped to ISO images, many of them bootable. Why is it so hard to take a bootable ISO and turn it into a bootable USB key? (I tried "unetbootin" but it didn't work on the PartitionMagic ISO, which is usually my first step in installing onto a new PC.)Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com6tag:blogger.com,1999:blog-8803652895687747114.post-32400920571510015272009-06-12T04:38:00.000-07:002010-05-15T19:18:55.585-07:00WiFi prices finally come downI've always been annoyed by just how expensive WiFi access is for such spotty coverage. 
I've had the T-Mobile plan ($30/mo) for a few years, which provides coverage at Starbucks (until they switched to AT&T), and many airports and hotels. It's been a better deal than not having it (they have coverage at the hotel I stay at most frequently, and the airports I transit through most frequently), but it always felt like too much money for too little service, given that there is not always a T-Mobile hotspot available. T-Mobile has roaming deals with many of the other big providers (Boingo, AT&T), but the only real benefit there is the convenience of the billing arrangement, as the roaming fees are not nominal. (Though I do like that T-Mobile also provides convenient pay-by-the-minute roaming access at many hotspots in Europe.) <br/><br/>Finally there seem to be some better alternatives. Boingo now seems to have an unlimited $10/month plan, so I switched to that. Boingo claims I also get free roaming on many T-Mobile, AT&T, and other hotspots -- I'll report on that once I get my first bill. I downgraded my T-Mobile account to the "Pay as you go" plan, which has no monthly fee, and is $3 for the first hour, which seems like a good option to have. <br/><br/>Starbucks also has a reasonably priced plan (Starbucks Gold Card) if you spend a lot of time in or near Starbucks (they are in the process of switching their hotspots from T-Mobile to AT&T.) For $25/yr, you get two hours per visit of WiFi time (not sure if this is enforced or not), plus a 10% discount on most Starbucks purchases.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com0tag:blogger.com,1999:blog-8803652895687747114.post-69710722954492384182008-11-19T10:51:00.000-08:002010-05-15T19:18:55.589-07:00Initial iPhone experience -- disappointingI live in an AT&T-free state, so I have not had access to the cult that is iPhone. 
But recently, in preparation for AT&T moving into the state (through an asset swap that involves AT&T acquiring the VT GSM assets that Verizon bought in acquiring Rural Cellular), they are now willing to open accounts with VT addresses. So when in CA this week, I went to an AT&T store to plunk down my money so I could be cool like all my friends. I purchased a 16GB iPhone 3G. <br/><br/>I got out of the store and into my car, and noticed that the edge where the front metal rim meets the plastic case was extremely rough -- almost sharp enough to cut. This was not the seamless tactile experience I was expecting from Apple. So I went back in the store, and asked for an exchange. I was told that "Apple prevents AT&T from making exchanges" and was sent to the Apple Store. When I arrived at the Apple Store, the rep informed me that they could make an exchange, but it would be a refurb unit, not a new one, even though mine was clearly new, because I'd bought it at an AT&T store and not an Apple Store. <br/><br/>So I went back to the AT&T store and argued with the manager. He tried to send me back to Apple. He ended up calling the Apple Store, who must have told him to take the exchange, so in the end I got a new, non-defective phone. All was made right, but the experience was none too pleasant, involving three store visits.<br/><br/>While in the Apple Store, which had many iPhones on display, I took the opportunity to do some sampling. I discovered that many iPhones had rough or sharp spots, and not all in the same places. 
It seems that in reducing the cost of the 3G, some quality-control corners were cut as well: many units were not very pleasing to the touch, and there were significant variations in perceivable quality.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com2tag:blogger.com,1999:blog-8803652895687747114.post-84540753335449022012008-10-15T09:53:00.000-07:002010-05-15T19:18:55.590-07:00My favorite computer science book<a href="http://www.amazon.com/gp/product/0262162091?ie=UTF8&tag=none0b69&linkCode=as2&camp=1789&creative=9325&creativeASIN=0262162091"><img src="https://images-na.ssl-images-amazon.com/images/I/51QFBB4EA0L._SL160_.jpg" border="0" alt="" /></a><img style="border:none !important; margin:0px !important;" src="http://www.assoc-amazon.com/e/ir?t=none0b69&l=as2&o=1&a=0262162091" border="0" alt="" width="1" height="1" /><br/><br/>Pierce's <a href="http://www.amazon.com/gp/product/0262162091?ie=UTF8&tag=none0b69&linkCode=as2&camp=1789&creative=9325&creativeASIN=0262162091">Types and Programming Languages</a> is a masterful introduction to the theory and practice of type systems. One of the things that makes this book so great is that it is equally accessible to both the theory-oriented and the practice-oriented. This was driven home to me in a conversation with Ola Bini, when I saw he was carrying this book, and he commented "I love this book because I can skip all the math and get what I need from the ML implementation." I answered that I liked it for the opposite reason; I was able to get everything I needed from the math and didn't have to look at the code. It's pretty impressive that a book can be that useful and successful for two such radically different reader approaches.<br/><br/>I found that Pierce's treatment was extremely accessible. 
He starts with almost no assumptions, introduces first the untyped lambda calculus, then the simply typed lambda calculus, some obvious extensions (records, references, subtyping, union types, functional objects, etc.), operational semantics, and builds gradually to more useful type systems. Each section includes motivation, analysis, a formal description of the system, soundness proofs, and ML code; the impatient can skip some of these and still get what he's talking about. There is working code for each of the languages developed. (The type systems were developed in a system that the author wrote called <a href="http://citeseer.ist.psu.edu/cache/papers/cs2/421/http:zSzzSzwww-sop.inria.frzSzcertilabzSzLFM00zSzProceedingszSzPaperszSzPierce.pdf/levin00tinkertype.pdf">TinkerType</a>, which makes it possible to build type systems by "mixing and matching" features, and it generates both the ML code and TeX source for generating the figures used in the book -- most impressive!)<br/><br/>Not only is this book useful to anyone who is interested in the design and science of programming languages, but it is also a pleasure to read.<br/><br/>What's your favorite computer science book? (Unoriginality points for anyone who says TAOCP.)Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com16tag:blogger.com,1999:blog-8803652895687747114.post-24841664099589933622008-09-17T00:31:00.000-07:002010-05-15T19:18:55.592-07:00David Foster Wallace, RIPI was deeply saddened at the news that David Foster Wallace committed suicide last week. <br/><br/>For me, the experience of reading Wallace's writing is not unlike that of watching an Olympic gymnast. While the right side of the brain is being entertained by the grace and artistry, the left side is frantically marvelling at how the human body can do that at all. The tension between the two -- where your brain can't decide where to focus, not wanting to miss either part -- adds all the more to the experience. 
<br/><br/>Wallace's mastery of the language is undeniable; one could read his work simply to marvel at the construction of each sentence or his ability to move effortlessly from one writing style to another. But, unlike other authors known for their "style", the writing is merely the surface layer; Wallace actually has something to say, his arguments are compelling and challenging and beautifully constructed, and supported with relevant data drawn from disciplines ranging from literary theory to mathematics. And somewhere along the line he also manages to make you laugh out loud -- right before you have to pick up the dictionary for the seventh time. <br/><br/>One is, at the same time, amazed, informed, challenged, entertained, and, honestly, filled with that feeling of "I'm not worthy" on multiple levels. <br/><br/>I would like to be able to say "I knew him when"; he and I overlapped for a year or two at Amherst. But I never actually met him, I only heard the stories, such as his senior English thesis being published as a novel ("The Broom of the System"), or being the only student in then-recent memory to have achieved the distinction of <em>summa cum laude </em>for his thesis work in two separate majors (English and Philosophy.) <br/><br/>Harper's Magazine has graciously made the pieces he published in that magazine available for free on the web: <a href="http://www.harpers.org/archive/2008/09/hbc-90003557">http://www.harpers.org/archive/2008/09/hbc-90003557</a><a href="http://www.harpers.org/#hbc-90003557"></a>. If you've not had the pleasure, I suggest you read "Tense Present" -- which probes "the seamy underbelly of US lexicography" -- and then marvel at the notion of how entertaining and actually useful a book review of a dictionary could be. 
<br/><br/>Rest in peace.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com1tag:blogger.com,1999:blog-8803652895687747114.post-41147774578053576102008-06-07T06:20:00.000-07:002010-05-15T19:18:55.594-07:00Wallboard + paint + pressure = superglueThis surprised the heck out of me. We recently finished a new TV room down in the basement. We have a 50" plasma TV, mounted on the wall with an <a href="http://www.amazon.com/gp/product/B000FT1BP4/103-8172250-4738209?ie=UTF8&tag=none0b69&linkCode=xm2&camp=1789&creativeASIN=B000FT1BP4">Omnimount UCL </a> mount (quite an impressive bit of engineering -- not cheap, but highly recommended.) Since the mount weighs upwards of 40 pounds, and supports a TV that weighs 100 pounds on a torque arm as long as 2ft, it needs to be anchored pretty solidly to the wall. It is held up with six 4"-long, 3/8"-wide lag screws that screw into two separate studs. It does its job well.<br/><br/>So, my dad and I went to remove it from the wall. I removed the six screws, and we prepared to catch the mount. It didn't fall off the wall. We tugged on it, and it still didn't come off the wall. Seemed stuck so tight we thought we'd missed a screw! But we convinced ourselves there were no more screws, and the two of us pulled hard, and eventually it came off the wall -- taking some of the wallboard with it. Apparently the pressure of being screwed up against the wallboard (and maybe the heat from the TV too over a few years) turned the painted surface into a glue not only strong enough to hold a 40lb mount to a vertical wall but also to resist being pulled off!Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com2tag:blogger.com,1999:blog-8803652895687747114.post-71237495418156169892008-06-02T06:27:00.000-07:002010-05-15T19:18:55.596-07:00Questions from the Peanut Gallery, part IAt the "Writing the Next Great Java Book" BOF at JavaOne, there were unfortunately many more questions than we had time to answer. 
Fortunately, Greg Doench saved the questions (presciently, they were submitted on paper), so I'll answer a few here. <br/><br/>Q: About how much time did you put into your book (effort, not duration)? <br/>A: In my case, the answers for effort and duration are the same, as I had the luxury of writing the book mostly-full-time -- I made the book my foreground activity, though I still did some consulting and training while I was working on it. I spent approximately 16 months on the book -- longer than planned (but in hindsight that's no surprise.) <br/><br/>Q: Was the financial compensation worthwhile?<br/>A: Unless your name is Stephen King or JK Rowling, writing books is not something you do for the financial compensation. This is even more true for technical books, because (a) the audience for books like Java Concurrency in Practice is not quite as large as the audience for Harry Potter, and (b) if you have the skills to write a good technical book you probably have the skills to get a well-paying technical job. Without going into the details, I'll say that the compensation is about what I expected -- but I went into it with very realistic expectations. The compensation comes in other forms. <br/><br/>Q: How much support and assistance was provided by the publisher?<br/>A: I think this is a matter of how much support and assistance you ask of the publisher -- and how much the publisher thinks you need. In our case, we did everything ourselves, including typesetting and managing the review, copy editing, and index creation. These are things the publisher often does for authors (and might even have preferred to do), but we chose to do it ourselves, and the publisher agreed. Of course, this was more work, but it was work we gladly did. The A-W team was always responsive when we did ask for things. 
So I think the answer is "as much as you appear to need."<br/><br/>Q: How does the short half-life of technical topics affect the effort?<br/>A: I deliberately chose a topic with a longer half-life. This gave me the latitude to let the book tell me when it was done, rather than the schedule. For material with a shorter shelf life, I might be inclined to choose a shorter format, so that the book is less out-of-date by the time it is published. <br/><br/>Q: What would you say about books whose authors release chapters to the public as they write?<br/>A: I think this presupposes a style of writing where the author sits down and writes the book linearly. I am sure some authors do this, and some topics are more amenable to this approach than others. But one of the most important freedoms in writing is the freedom to refactor continuously; very often you don't figure out the right way to present the material until you've presented it the wrong way (just as with code.) There's nothing wrong with putting the work out there early -- this is a great source of free review -- but you have to be careful that doing so doesn't cause you to settle into the belief that the structure of the book has been decided. (The same risk is true of trying to adhere to a schedule that assigns due dates to specific chapters.) <br/><br/>Q: How do you avoid example source code exploding without using unrealistic examples?<br/>A: This is really hard! But it's really important. In JCiP, we set a rule for ourselves of "no code example more than a page", with the target of making most of them a half page or less. This is not easy, especially in Java! (There was only one we had to break into two separate one-page listings.) We wanted the examples to each illustrate a single point, so that the reader could look at the example and easily see what it was trying to show. There are some obvious tricks; eliminating boilerplate code like constructors, getters, and setters helps a little bit. 
What worked for us was to pick realistic examples that the audience would immediately understand the utility of (such as a file crawler), but abstract away the irrelevant concrete details by not showing the bodies of methods that are not needed to make the point that the example is supposed to illustrate. For example, we have a set of examples in Chapter 8 where we illustrate searching for solutions to a class of puzzles such as the "sliding block puzzles." But rather than focus on a specific puzzle -- which would take lots of space and not offer all that much insight -- we abstract the nature of the puzzle by defining an interface that specifies the initial position (in terms of an abstract Position class), valid moves (in terms of an abstract Move class), and the goal position. Then we can illustrate various search techniques in terms of the abstract puzzle without getting bogged down in the details. <br/><br/>Q: What would you say is the role of technical books in the age where the Internet is the fastest way to publish texts and technology changes so fast that one year after publishing texts become irrelevant?<br/>A: Some technical books are simply a form of documentation; any book that has a version number in the title is likely to fall into this category. These books have a very short shelf life. Other books, those that tend to focus on concepts rather than specific technical details, tend to have a longer shelf life. In any case, the publishing industry needs to become more agile in its approach to managing the authoring and production process, and explore more seriously alternate publication vectors such as electronic publishing. 
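The puzzle abstraction described in the answer above can be sketched in code. The names Puzzle, Position, and Move follow the book's description, but the exact signatures here are my reconstruction for illustration, not JCiP's actual listing; the trivial CountingPuzzle is likewise my own stand-in for a real sliding-block puzzle:

```java
import java.util.*;

// Hypothetical reconstruction of the abstract-puzzle idea described above;
// signatures are illustrative, not copied from the book.
interface Puzzle<P, M> {
    P initialPosition();            // where the search starts
    boolean isGoal(P position);     // have we solved it?
    List<M> legalMoves(P position); // moves available from a position
    P move(P position, M move);     // position resulting from a move
}

// A deliberately trivial puzzle (reach 5 from 0 by steps of +1 or +2),
// just to show that search code needs only the interface.
class CountingPuzzle implements Puzzle<Integer, Integer> {
    public Integer initialPosition() { return 0; }
    public boolean isGoal(Integer p) { return p == 5; }
    public List<Integer> legalMoves(Integer p) { return Arrays.asList(1, 2); }
    public Integer move(Integer p, Integer m) { return p + m; }
}

class Solver {
    // Breadth-first search over the abstract puzzle; knows nothing about
    // any concrete puzzle's details. Returns the moves to a goal, or null.
    static <P, M> List<M> solve(Puzzle<P, M> puzzle) {
        Queue<P> frontier = new ArrayDeque<>();
        Map<P, List<M>> paths = new HashMap<>();
        frontier.add(puzzle.initialPosition());
        paths.put(puzzle.initialPosition(), new ArrayList<>());
        while (!frontier.isEmpty()) {
            P pos = frontier.remove();
            if (puzzle.isGoal(pos))
                return paths.get(pos);
            for (M m : puzzle.legalMoves(pos)) {
                P next = puzzle.move(pos, m);
                if (!paths.containsKey(next)) {     // not yet visited
                    List<M> path = new ArrayList<>(paths.get(pos));
                    path.add(m);
                    paths.put(next, path);
                    frontier.add(next);
                }
            }
        }
        return null; // no solution
    }
}
```

The point is exactly the one made above: the solver is written once against the abstraction, so each search variant can be shown without the bulk of a concrete puzzle.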
<br/><br/>More later.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com0tag:blogger.com,1999:blog-8803652895687747114.post-44752696926495033862008-05-31T05:29:00.000-07:002010-05-15T19:18:55.599-07:00The making of JCiP; avoiding errors in code listingsAt JavaOne this month, my publisher at Pearson, Greg Doench, hosted a panel/BOF entitled "Writing the Next Great Java Book." Not surprisingly, when you get four authors in a room, you get seven opinions about what's important in writing a book!<br/><br/>One thing that we (mostly Tim, actually) did for JCiP was set up an infrastructure for the book, similar to what you'd do for a software project. Version control, issue tracking, one-step build script, continuous build -- all of these things offer the same benefits to book projects as they do for software.<br/><br/>One critical aspect of the build is the handling of program listings. It is incredibly tempting to cut and paste examples from the IDE into whatever source format you're writing in (Word, Frame, LaTeX, DocBook), but this is a recipe for disaster -- errors will invariably creep in as you try and make small tweaks (such as changing variable names) outside the IDE. And code examples with errors really undermine the reader's confidence (or worse, they copy the incorrect example into their code.) So, we wanted to make sure that every code example compiled (and ideally, was tested.)<br/><br/>Our approach was to check the code into Subversion with the rest of the book artifacts, ensure that the build process compiled the code and ran the unit tests, and then automatically extract the examples from the code in a format into which they could be directly included by the build. 
Some systems (LaTeX, DocBook) make this sort of inclusion easier than others.<br/><br/>We marked the examples up with comments for formatting (bold, italic) and also with "snip here" comments that excluded the irrelevant portions of the code from the listings that actually went into the book. The attached Perl script (<a href="http://www.briangoetz.com/blog/wp-content/uploads/2008/05/phragmite.pl">phragmite.pl</a>), written by Tim Peierls (based on an approach designed by Ken Arnold), takes a set of source files as input and produces a set of LaTeX files representing the extracted listings.<br/><br/>As an example, here is the Counter listing from Listing 4.1 of JCiP:<br/><pre>// !! Counter Simple thread-safe counter using the Java monitor pattern<br/>// vv Counter<br/>@ThreadSafe<br/>public final class Counter {<br/> /*[*/@GuardedBy("this")/*]*/ private long value = 0;<br/> public /*[*/synchronized/*]*/ long getValue() {<br/> return value;<br/> }<br/> public /*[*/synchronized/*]*/ long increment() {<br/> if (value == Long.MAX_VALUE)<br/> throw new IllegalStateException("counter overflow");<br/> return ++value;<br/> }<br/>}<br/>// ^^ Counter</pre><br/>The first line identifies the type of the code fragment (!! for a "good example", ?? for a "bad example", which would get decorated with a Mr. Yuk), the name of the fragment (Counter), and the listing caption. The lines with the vv and ^^ comments mean "snip from here to here", and a listing can be made of multiple such fragments. The /*[*/ and /*]*/ comments mean "bold". 
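The core of the snipping step is simple; here is a minimal Java sketch of the vv/^^ extraction idea. This is my illustration only -- the real work was done by phragmite.pl, which also handled the formatting markers and emitted LaTeX:

```java
import java.util.*;

// Illustrative sketch of the "snip here" extraction described above
// (not phragmite.pl itself): collect the lines between "// vv Name"
// and "// ^^ Name" into named fragments. A fragment name may appear
// in several vv/^^ pairs; the pieces are concatenated.
class FragmentExtractor {
    static Map<String, List<String>> extract(List<String> sourceLines) {
        Map<String, List<String>> fragments = new LinkedHashMap<>();
        String current = null;                      // fragment being collected
        for (String line : sourceLines) {
            String t = line.trim();
            if (t.startsWith("// vv ")) {
                current = t.substring(6).trim();    // start (or resume) fragment
                fragments.computeIfAbsent(current, k -> new ArrayList<>());
            } else if (t.startsWith("// ^^ ")) {
                current = null;                     // end of snipped region
            } else if (current != null) {
                fragments.get(current).add(line);   // inside a snipped region
            }
            // lines outside any vv/^^ pair are dropped from the listing
        }
        return fragments;
    }
}
```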
The following ANT target ran the script:<br/><pre> <target name="listings"><br/> <exec dir="${bin.dir}" executable="perl"><br/> <arg value="${phragmite.pl}"/><br/> <arg value="${listings.dir}"/><br/> <arg value="${fragments.dir}/*.java"/><br/> <arg value="${fragments.dir}/jcip/*.java"/><br/> </exec><br/> </target></pre><br/>In the book's LaTeX source, we use the following LaTeX macro to pull the listing in:<br/><br/><code>\newcommand{\JavaListing}[1]{\input{listings/#1}}</code>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com5tag:blogger.com,1999:blog-8803652895687747114.post-87971552950201771212008-05-17T03:53:00.000-07:002010-05-15T19:18:55.603-07:00Apologies for the malware warningsApparently, WordPress is vulnerable to some script injection bugs, and this site was hit by them. And Google tagged the site as "spreading malware", so the site shows up with a warning in Google search results and FF3 users can't get to it at all. I've upgraded WordPress, scoured the DB for injected scripts, and am in the process of begging Google to let me off the blacklist.<br/><br/>What a pain in the butt. People suck.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com1tag:blogger.com,1999:blog-8803652895687747114.post-45039341216082405132008-04-30T14:48:00.000-07:002010-05-15T19:18:55.605-07:00Tailored, indeedI noticed this amusing typo on the trade show floor at SDWest this year. 
My first thought: wow, they really are tailored!<br/><br/> <a href="http://www.briangoetz.com/blog/wp-content/uploads/2008/04/tailored_indeed.jpg" title="tailored_indeed.jpg"><img src="http://www.briangoetz.com/blog/wp-content/uploads/2008/04/tailored_indeed.jpg" alt="tailored_indeed.jpg" /></a>Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com1tag:blogger.com,1999:blog-8803652895687747114.post-21348646387240509142008-04-30T14:45:00.000-07:002010-05-15T19:18:55.606-07:00Hacker at work<a href="http://www.briangoetz.com/blog/wp-content/uploads/2008/05/photo_030208_002.jpg" title="HackerAtWork"><img src="http://www.briangoetz.com/blog/wp-content/uploads/2008/05/photo_030208_002.thumbnail.jpg" alt="HackerAtWork" /></a><br/><br/>This photo was taken in Josh Bloch's garage on a recent trip to CA. We were installing a new fireplace grille after the fireplace had been re-faced with some beautiful vintage tiles. Even though this was an entirely analog activity, we still managed to get a little hacking in.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com2tag:blogger.com,1999:blog-8803652895687747114.post-25356915185917291052007-12-10T09:23:00.000-08:002010-05-15T19:18:55.612-07:00A day and a half in the life...This is a story about Windows, Samba, debugging, and frustration. I just spent a day and a half debugging a very annoying performance problem with a brand-new system.<br/><br/>Background: I just recently bought a new desktop computer, which I do about every two years. Computers should last longer than that, but I find that, even with good "hygiene", Windows systems tend to decay to the point where they exhibit weird behaviors after about two years, for which the cure is a complete reinstall of the OS and all applications. (Pause for Mac fanboys to snicker.) 
The "rebuild the world" process wouldn't be so bad if it weren't so hard to migrate all one's data -- even given the fact that a lot of my data is already in Subversion. The real problem is that each application sprays its configuration and data randomly around your system, whether in the program install folder, registry, documents and settings folder, local settings folder, etc.<br/><br/>So, I bought a new computer, an extra-quiet one from <a HREF="http://www.endpcnoise.com">www.endpcnoise.com</a>. These guys specialize in quiet systems, and since I work in a home environment, the computer is usually the noisiest thing in the room. (Of particular annoyance is the variable speed fan on my existing Dell system, which, every time the system worked up a sweat, made a whiny noise. And we was fined $50 and had to pick up the garbage in the snow, but that's not what I came to tell you about.) Pretty happy with the new system overall.<br/><br/>So, I installed XP Pro on the new system, and proceeded to install all my applications, utilities, and all kinds of groovy things that we were talking about on the bench. And then I got to the part where I tried to use it; specifically, tried to fire up IDEA and build the project I'm working on.<br/><br/>Now, I've got a somewhat weird setup; when developing on a project, I check out a workspace on my Linux server, which is served via Samba to my local network, and I run the IDE on my Windows desktop and point the IDE at my Samba share. There's a measurable performance hit vs. local, but I like being able to do some things from the Linux command line and other things from the IDE, so overall it's a more productive setup for me.<br/><br/>When you ask the IDE to "make" the project, it crawls the files in the project checking their modification times. An "empty" make on a project with ~1000 files generally takes a few seconds to figure out that there's nothing to do. 
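For context, that up-to-date check is essentially a recursive walk comparing source and output timestamps -- lots of small per-file metadata queries, which is exactly the traffic pattern that matters when the file system is on the other end of a network share. A rough sketch of the idea (my illustration, not IDEA's actual implementation):

```java
import java.io.File;

// Rough sketch of an IDE-style up-to-date check (not IDEA's actual code):
// a class file is considered stale if it is missing or older than its
// source file. Each file costs at least one metadata query (lastModified),
// which over a network share becomes one or more round trips.
class UpToDateChecker {
    static int countStale(File srcDir, File outDir) {
        int stale = 0;
        File[] children = srcDir.listFiles();
        if (children == null)
            return 0;                       // not a directory, or unreadable
        for (File f : children) {
            if (f.isDirectory()) {
                // recurse into the matching output subdirectory
                stale += countStale(f, new File(outDir, f.getName()));
            } else if (f.getName().endsWith(".java")) {
                File cls = new File(outDir, f.getName().replace(".java", ".class"));
                if (!cls.exists() || cls.lastModified() < f.lastModified())
                    stale++;                // missing or out-of-date output
            }
        }
        return stale;
    }
}
```

On a local disk each `lastModified` call is cheap; the interesting question below is what those calls turn into on the wire.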
But when I set it up, the new (faster) system took about 30-60s to do an empty make.<br/><br/>OK, it's debugging time. What's different about the two systems? Same OS (XP Pro), same service pack level, both systems up to date on patches, same Java version, same IDE version, same user credentials, no host-specific information in my Samba config. Different hardware. Make sure my Ethernet drivers are up-to-date. Test network for errors, swap cables, all that. Run IOMeter, found that both get similar throughput for large files on the same Samba share.<br/><br/>Crank up perfmon, which tells me that the new system is sending out more packets for a make than the old one. OK, crank up Ethereal, get a packet capture, and find that the new system is sending/receiving 10x as many packets for the same operation:<br/><br/><em>[brian@brian-server ~]$ wc -l /tmp/*cap<br/>258204 /tmp/new-cap<br/>17719 /tmp/old-cap<br/></em><br/><br/>So, what's the difference? Let's look at the packet capture. In the old trace, for each file being probed, it did something like this:<br/><br/><em> 0.467895 192.168.1.104 -> 192.168.1.107 SMB Trans2 Request, QUERY_PATH_INFO, Query File Basic Info, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree\OnDeleteElementTree.class<br/>0.468041 192.168.1.107 -> 192.168.1.104 SMB Trans2 Response, QUERY_PATH_INFO<br/>0.468283 192.168.1.104 -> 192.168.1.107 SMB Trans2 Request, QUERY_PATH_INFO, Query File Network Open Info, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree\OnDeleteElementTree.class<br/>0.468402 192.168.1.107 -> 192.168.1.104 SMB Trans2 Response, QUERY_PATH_INFO</em><br/><br/>Two requests, two responses per file. Seemed reasonable. 
On the new system, for each file:<br/><br/><em> 2.010471 192.168.1.113 -> 192.168.1.107<br/>SMB NT Create AndX Request, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree<br/>2.010698 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX Response, FID: 0x2754<br/>2.010900 192.168.1.113 -> 192.168.1.107 SMB NT Create AndX Request, Path: \<br/>2.011011 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX Response, FID: 0x2755<br/>2.011237 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, FIND_FIRST2, Pattern: \work<br/>2.011570 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, FIND_FIRST2, Files: work<br/>2.011752 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x2755<br/>2.011833 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.012025 192.168.1.113 -> 192.168.1.107 SMB NT Create AndX Request, Path: \work\openjfx-compiler<br/>2.012157 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX Response, FID: 0x2756<br/>2.012353 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, FIND_FIRST2, Pattern: \work\openjfx-compiler\classes<br/>2.012631 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, FIND_FIRST2, Files: classes<br/>2.012796 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x2756<br/>2.012897 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.013100 192.168.1.113 -> 192.168.1.107 SMB NT Create AndX Request, Path: \work\openjfx-compiler\classes\production\openjfx-compiler<br/>2.013239 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX Response, FID: 0x2757<br/>2.013445 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, FIND_FIRST2, Pattern: \work\openjfx-compiler\classes\production\openjfx-compiler\com<br/>2.013894 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, FIND_FIRST2, Files: com<br/>2.014095 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x2757<br/>2.014174 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.014355 192.168.1.113 -> 192.168.1.107 SMB NT Create AndX Request, Path: 
\work\openjfx-compiler\classes\production\openjfx-compiler\com<br/>2.014504 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX Response, FID: 0x2758<br/>2.014962 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, FIND_FIRST2, Pattern: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun<br/>2.015169 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, FIND_FIRST2, Files: sun<br/>2.015339 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x2758<br/>2.015428 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.015633 192.168.1.113 -> 192.168.1.107 SMB NT Create AndX Request, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun<br/>2.015764 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX Response, FID: 0x2759<br/>2.015980 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, FIND_FIRST2, Pattern: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx<br/>2.016221 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, FIND_FIRST2, Files: javafx<br/>2.016402 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x2759<br/>2.016493 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.016693 192.168.1.113 -> 192.168.1.107 SMB NT Create AndX Request, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx<br/>2.016827 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX Response, FID: 0x275a<br/>2.017096 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, FIND_FIRST2, Pattern: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api<br/>2.017348 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, FIND_FIRST2, Files: api<br/>2.017520 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x275a<br/>2.017590 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.017803 192.168.1.113 -> 192.168.1.107 SMB NT Create AndX Request, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api<br/>2.017919 192.168.1.107 -> 192.168.1.113 SMB NT Create AndX 
Response, FID: 0x275b<br/>2.018133 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, FIND_FIRST2, Pattern: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree<br/>2.018389 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, FIND_FIRST2, Files: tree<br/>2.018547 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x275b<br/>2.018626 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.018779 192.168.1.113 -> 192.168.1.107 SMB Close Request, FID: 0x2754<br/>2.018851 192.168.1.107 -> 192.168.1.113 SMB Close Response<br/>2.019157 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, QUERY_PATH_INFO, Query File Basic Info, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree\OnDeleteElementTree.class<br/>2.019292 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, QUERY_PATH_INFO<br/>2.019495 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, QUERY_PATH_INFO, Query File Standard Info, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree\OnDeleteElementTree.class<br/>2.019613 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, QUERY_PATH_INFO<br/>2.019832 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, QUERY_PATH_INFO, Query File Internal Info, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree\OnDeleteElementTree.class<br/>2.019960 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, QUERY_PATH_INFO<br/>2.020206 192.168.1.113 -> 192.168.1.107 SMB Trans2 Request, QUERY_PATH_INFO, Query File Network Open Info, Path: \work\openjfx-compiler\classes\production\openjfx-compiler\com\sun\javafx\api\tree\OnDeleteElementTree.class<br/>2.020316 192.168.1.107 -> 192.168.1.113 SMB Trans2 Response, QUERY_PATH_INFO</em><br/><br/>For those of you who don't enjoy reading packet dumps (I hope that's all of you), what's going on here is that it does some sort of complicated multipacket transaction for each level of directory from the project 
root down to the last directory in the chain, and then does four request-responses for each file. And it repeats the directory stuff for every file, even though it had just asked that.<br/><br/>OK, more debugging -- what could cause a system to deviate from the standard file system client behavior? Check all the network control panel settings, they're all the same. Spend several hours googling through the MS knowledge base for file sharing related problems, look at the various registry keys and file versions mentioned, nope, none of them are helpful. Google for people who have had similar problems. Many have, but no one reported a solution that works, except one person, who mentioned that their network behavior changed when they changed versions of Symantec antivirus. Well, I don't run Symantec AV, but I do run ZoneAlarm. And I do have different versions -- ZA Antivirus on the old system, ZA Suite on the new. Seems like a small difference -- they're clearly built on the same base technology -- but let's try it. Disabled ZA, rebooted, made sure it wasn't running, and ran my IDE again -- no change. Still an annoying 30-60s delay before it figures out there's nothing to rebuild.<br/><br/>At this point, I asked my friends for help. Lots of sympathy. Lots of "check this, check that", but very little advice that actually moved me towards a solution (sorry, guys).<br/><br/>That post about the guy with the Symantec problem gnawed at me, though. I know security programs intercept a lot of network traffic, so the theory was perfectly plausible, and the best theory I had so far. I did the "disable ZA" thing again, rebooted, and cranked up <a HREF="http://www.resplendence.com/hookanalyzer">Rootkit Hook Analyzer </a>to see if ZA still had anything hooked, and it did, even though there were no ZA processes running and the ZA TrueVector service was stopped. 
So, I ran the uninstaller for ZA Suite, rebooted, checked with RHA to see that everything was unhooked (it was), and ran my IDE test again -- and this time, sweet success!<br/><br/>So, the conclusion is that the ZA Suite interferes with file sharing client behavior in a rather fundamental way (but one which only has a noticeable effect when dealing with lots of small files).<br/><br/>So, my system is temporarily defenseless against malware while I decide what to do. Why on earth would ZA rewrite the file system client packet stream like that? I want to send them a bill for that day and a half.Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com14tag:blogger.com,1999:blog-8803652895687747114.post-25075197633580998202007-06-19T02:07:00.000-07:002010-05-15T19:18:55.617-07:00Remove checked exceptions?Recently, Neal Gafter mused about whether we should consider <a href="http://gafter.blogspot.com/2007/05/removing-language-features.html">removing checked exceptions from Java</a>. The motivation for this was not what you might expect, but rather an observation that checked exceptions interact heavily with a lot of other language features, and that evolving the language might be easier if we were willing to consider removing some features. (Neal knows this won't ever happen; he's just trying to get us thinking about Life After Java.) Not surprisingly, it generated a storm of comments, ranging from "hell yeah!" to "hell no!".<br/><br/>This isn't a new topic; it comes around every few years. A few years back I <a href="http://www.ibm.com/developerworks/java/library/j-jtp05254.html">wrote about </a>the debate surrounding checked exceptions, and the debate continues to rage. 
My problem is that I think most of the vocal opponents of checked exceptions are objecting for the wrong reasons (back then, I wrote: "My opinion is that, while properly using exceptions certainly has its challenges and that bad examples of exception usage abound, most of the people who agree [ that checked exceptions are a bad idea ] are doing so for the wrong reason, in the same way that a politician who ran on a platform of universal subsidized access to chocolate would get a lot of votes from 10-year-olds").<br/><br/>Reading through the against-checked-exceptions commenters on Neal's blog, we can divide them into three primary groups:<br/><ol><br/> <li>"I don't like checked exceptions because they're too much work." </li><br/> <li>"Checked exceptions were a nice idea in theory, but using them correctly makes your code really ugly, and I'm left with a choice of ugly code or wrong code, and that seems a bad choice." </li><br/> <li>"Checked exceptions are a good idea, but the world isn't ready for them." (Frequent refrain from this group: "Man, have you <em>looked </em>at some of the code out there?")</li><br/></ol><br/>To the people in camp (1), I say: engineering is hard -- get over it. Error handling is one of the hardest things to get right, and one of the easiest things to be lazy about. If you're writing code that's supposed to work more than "most of the time", you're supposed to be spending time thinking about error handling. And, pay for your own damn chocolate. <br/><br/>To the people in camp (2), I have more sympathy. Exceptions do make your code ugly, and proper exception handling can make your code really ugly. This is a shame, because exceptions were intended to reduce the amount of error-handling code that developers have to write. (Ever try to properly close a JDBC Connection, Statement, and ResultSet? It requires three finally blocks. Ugly if you do it right. But, almost no one ever does it right. 
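Done right, that close sequence takes three nested try/finally blocks. A minimal sketch of the shape -- the list entries here are hypothetical stand-ins for the real Connection/Statement/ResultSet closes, so the close ordering is observable without a database:

```java
import java.util.ArrayList;
import java.util.List;

public class NestedFinallyDemo {
    // Records the order in which the (pretend) resources get closed.
    static final List<String> closed = new ArrayList<>();

    static void doQuery() {
        // Connection c = DriverManager.getConnection(...);
        try {
            // Statement s = c.createStatement();
            try {
                // ResultSet rs = s.executeQuery(...);
                try {
                    // ... use the ResultSet; here, the query blows up:
                    throw new RuntimeException("simulated failure mid-query");
                } finally {
                    closed.add("resultSet");  // try { rs.close(); } catch ...
                }
            } finally {
                closed.add("statement");      // try { s.close(); } catch ...
            }
        } finally {
            closed.add("connection");         // try { c.close(); } catch ...
        }
    }

    public static void main(String[] args) {
        try {
            doQuery();
        } catch (RuntimeException expected) {
            // the simulated failure propagates, but the closes still ran
        }
        System.out.println(closed); // prints [resultSet, statement, connection]
    }
}
```

Even though the body fails partway through, all three finally blocks run, innermost resource first -- which is exactly the guarantee the nesting buys you.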
(The real culprit here is that close() throws an exception -- what are you supposed to do with that exception? But that's fish under the bridge.)) <br/><br/>But perhaps there's a way to not throw the baby out with the bathwater, by providing better exception handling mechanisms that are less ugly. Dependency injection frameworks did a lot of that for us already, for a large class of applications -- and the code got a lot prettier, easier to write, and easier to read. AFAICS, the two biggest removable uglinesses of exceptions are repeated identical catch clauses and exception chaining. <br/><br/>The repeated catch clause problem is when you call a method that might throw exceptions A, B, C, and D, which do not have a common parent other than Exception, but you handle them all the same way. (Reflection is a major offender here.) <br/><br/>public void addInstance(String className) {<br/> try {<br/> Class clazz = Class.forName(className);<br/> objectSet.add(clazz.newInstance());<br/> }<br/> catch (IllegalAccessException e) {<br/> logger.log("Exception in addInstance", e);<br/> }<br/> catch (InstantiationException e) {<br/> logger.log("Exception in addInstance", e);<br/> }<br/> catch (ClassNotFoundException e) {<br/> logger.log("Exception in addInstance", e);<br/> }<br/>}<br/><br/>You'd like to fold the catch clauses together, because duplicated code is bad. Some people simply catch Exception, but this has a different meaning -- because RuntimeException extends Exception, you're also sweeping up unchecked exceptions accidentally. 
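A small sketch of that accidental sweep-up (lookup and addInstance here are hypothetical stand-ins): an unchecked IllegalStateException -- a bug -- gets logged as though it were a routine checked failure:

```java
public class SweepUpDemo {
    // Hypothetical lookup that declares a checked exception -- and, because
    // of a bug, can also throw an unchecked one.
    static void lookup(String name) throws ClassNotFoundException {
        if (name == null)
            throw new IllegalStateException("bug: name was never set");
        Class.forName(name);
    }

    // Folding the catch clauses by catching Exception...
    static String addInstance(String name) {
        try {
            lookup(name);
            return "ok";
        } catch (Exception e) {
            // ...also quietly catches the unchecked bug, because
            // RuntimeException extends Exception:
            return "logged " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(addInstance("java.lang.String")); // prints "ok"
        System.out.println(addInstance(null)); // prints "logged IllegalStateException"
    }
}
```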
You can explicitly catch and rethrow RuntimeException before catching Exception -- but it's easy to forget to do that.<br/><br/>public void addInstance(String className) {<br/> try {<br/> Class clazz = Class.forName(className);<br/> objectSet.add(clazz.newInstance());<br/> }<br/> catch (RuntimeException e) {<br/> throw e;<br/> }<br/> catch (Exception e) {<br/> logger.log("Exception in addInstance", e);<br/> }<br/>}<br/><br/>My proposal for this problem is to allow disjunctive type bounds on catch clauses:<br/><br/>public void addInstance(String className) {<br/> try {<br/> Class clazz = Class.forName(className);<br/> objectSet.add(clazz.newInstance());<br/> }<br/> catch (IllegalAccessException | InstantiationException | ClassNotFoundException e) {<br/> logger.log("Exception in addInstance", e);<br/> }<br/>}<br/><br/>My compiler friends tell me that this isn't too hard. <br/><br/>The other big ugliness with exceptions is wrapping and rethrowing:<br/><br/>public void findFoo(String name) throws NoSuchFooException {<br/> try {<br/> lookupFooInDatabase(name);<br/> }<br/> catch (SQLException e) {<br/> throw new NoSuchFooException("Cannot find foo " + name, e);<br/> }<br/>}<br/><br/>Now, the wrap-and-rethrow technique is very effective -- it allows methods to throw exceptions that are at an abstraction level commensurate with what the method is supposed to do, not how it is implemented, and it allows you to reimplement without destabilizing method signatures. But it adds a lot of bulk to the code. 
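Filled out as a compilable sketch (NoSuchFooException and the stubbed-out database layer are hypothetical, as in the example above):

```java
import java.sql.SQLException;

public class WrapRethrowDemo {
    // Hypothetical domain-level exception, as in the findFoo example.
    static class NoSuchFooException extends Exception {
        NoSuchFooException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // Stubbed-out low-level layer; a real one would query a database.
    static void lookupFooInDatabase(String name) throws SQLException {
        throw new SQLException("table FOO not found");
    }

    // Wrap-and-rethrow: callers see an exception at the method's abstraction
    // level, with the low-level cause chained on for diagnostics.
    static void findFoo(String name) throws NoSuchFooException {
        try {
            lookupFooInDatabase(name);
        } catch (SQLException e) {
            throw new NoSuchFooException("Cannot find foo " + name, e);
        }
    }

    public static void main(String[] args) {
        try {
            findFoo("bar");
        } catch (NoSuchFooException e) {
            // prints: Cannot find foo bar (caused by SQLException)
            System.out.println(e.getMessage() + " (caused by "
                    + e.getCause().getClass().getSimpleName() + ")");
        }
    }
}
```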
Since this is such a common pattern, couldn't it be solved with some sort of declarative "rethrows" clause:<br/><br/>public void findFoo(String name) throws NoSuchFooException<br/>rethrows SQLException as NoSuchFooException {<br/> lookupFooInDatabase(name);<br/>}<br/><br/>The rethrows clause is part of the implementation, not the signature, so maybe it goes somewhere else, but the idea is clear: if someone tries to throw an X out of this, wrap it with a Y and rethrow it. <br/><br/>An alternate approach would be possible with closures and reified generics; it would be possible to write a pseudo-control construct that said "execute this closure, but if it throws X, wrap it with a Y and rethrow it." Unfortunately, with the current state of generics, we can't write such a generic method; we'd have to write a separate one for each exception type we want to wrap. <br/><br/>These approaches focus on the symptom -- because the arguments in group (2) are about symptoms. If we could alleviate the symptoms, people might grumble less.<br/><br/>The people in camp (3) are saying something slightly different. I don't really have an answer for them, because what they seem to be saying is that no matter what mechanism you give people for dealing with failure, they won't follow it. Checked exceptions were a reaction, in part, to the fact that it was too easy to ignore an error return code in C, so the language made it harder to ignore. This works on a lot of programmers who are slightly lazy but know that ignoring exceptions is unacceptable, but apparently is worse than nothing for some parts of the population. (We'd like to take away their coding rights, but we can't.) <br/><br/> Checked exceptions <em>are </em>a pain, and in some frameworks (like EJB before dependency injection), can be really painful. 
Once the ratio of "real code" to "error handling code" rises above some threshold, readability suffers greatly, and readability is a fundamental value in the Java language design. Even if the IDE generates the boilerplate for you, you still have to look at it, and there's a lot of noise. <br/><br/>On the other hand, my experience using third-party C++ libraries was even more painful than anything Java exceptions have ever subjected me to. Virtually no packages ever documented what might be thrown, so you end up playing "whack-a-mole" when exceptions pop up -- and usually at your customers' sites. If people are not forced to document what errors their code throws, they won't -- especially the people that the people in camp (3) are afraid of. As long as those folks are allowed to code, the value we get from checked exceptions forcing developers to document failure modes overwhelms the annoyances.<br/><br/>But, as I said above, I think many of the annoyances can be removed by adding a small number of exception-streamlining constructs. This doesn't help Neal with simplifying closures, but it does help us get our job done with a little less pain. <br/><br/>Finally, a meta-note -- it's really easy to misinterpret the volume of support for "removing checked exceptions" as any sort of gauge of community consensus. We're intimately familiar with the pain that checked exceptions cause; we're substantially less familiar with the pain that they free us from. (Obviously, neither approach is perfect; otherwise there'd be no debate.)Brianhttp://www.blogger.com/profile/04667736036423173869noreply@blogger.com15