
The Dell 43″ 4K monitor

Posted by mitch on June 05, 2016
hardware

I was sitting on a plane on May 20th waiting to take off back home to Boston when I read about Dell’s release of the P4317Q, a 43″ 3840×2160 monster. I managed to order it before we got in the air.

I had been running the Dell 34″ curved monitor with a 27″ in portrait. The wideness of the 34″ was fantastic–it’s easy to see 3 pages of text side by side by side. However, the 34″ was frustrating for Xcode work–its 1440-pixel height isn’t ideal for most of my work compared to the standard 1600-pixel height of 30″ displays. Although I’ve sold off my original 3×30″ Apple displays, I still have and use 3 HP 30″ displays, 2 in one office and 1 in another.

But mostly I just want the biggest workspace possible, and with this monitor Dell has finally delivered one. The 43″ runs about 104 pixels per inch, comparable to the roughly 100 pixels per inch of a 30″ 2560×1600 display. This means there’s no “Retina”-style HiDPI tightness to the text, but instead a larger viewing area.
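(For reference, pixel density is just the diagonal pixel count over the diagonal size: √(3840² + 2160²) ≈ 4406 pixels across 43″ works out to roughly 102 PPI, and √(2560² + 1600²) ≈ 3019 pixels across 30″ is just over 100 PPI. The slightly higher 104 figure presumably reflects the panel’s true diagonal being a bit under 43″.)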

Initially I got extremely motion sick from the monitor, within about 5 minutes of use. It turned out my Gunnar computer glasses were the culprit with a display this large.

Since taking them off and spending about 20 hours with the monitor, I’ve found it’s almost exactly what you’d expect–a beautiful, expansive, stunning display with one major caveat: It’s not curved, which means the corners of the screen are about 7″ further away from my eyes than the middle of the screen. Hopefully a curved version is coming. A smaller challenge is that the top of the screen is “too high” if I slouch in my chair–I think the monitor actually works better when my standing desk is elevated, but this is relatively minor.

I tried to take some pictures to capture how big this screen is, but nothing really pulls it off. Below is a picture showing a 4-page Word doc at 150% zoom–the pages are slightly larger on the screen than when printed. Without toolbars, it’s quite possible to squeeze almost 8 pages onto the screen in a 4×2 grid without much reduction in size.

If you have room for it and are tired of bezels breaking up the view, this could be a good way to go. I’ll definitely upgrade when someone comes out with a curved version or something even wider–I’d love to have a 50″ curved display, perhaps on the order of 5440×2160.

[Photo: the 4-page Word document at 150% zoom on the 43″ display]


What’s the deal with printers?

Posted by mitch on May 25, 2015
hardware, productivity

In 2001, Lexmark offered a PostScript USB printer for $399. No networking, but laser! For under $400!

I bought it. The printer couldn’t print straight (the paper tray was poorly designed), but it was laser! For under $400! On my desk! And it worked with lpd, which meant all of my computers could print (but not parallel to the edges of the page). Data Domain, in 2002 and 2003, actually had two of these same Lexmarks, slightly newer with some tweaks that seemed to fix the paper tray issue.

By about 2004 or 2005, Lexmark had a new personal laser printer, which I picked up for about $280. It could print straight and it was fast. Great printer.

In 2007, I moved to Boston and gave that printer away and bought a Canon all-in-one. It could scan to email or USB stick and create PDFs, photocopy, etc–it was fantastic and it was only $400 or so. Except that the fonts and text quality were quite bad, but other than that…

In 2009, I was preparing for and stressing over a set of presentations with millions of dollars on the line. I was worried about printing slides and bought a color Lexmark laser printer, I think for about $500. I printed my slides (50 pages or so) and didn’t use the printer for about 13 months. When I went to use it again, it had some internal error that apparently meant the printer was now a large boat anchor. I had kept the Canon and just kept using it.

In 2012, I had a job interview. I took my résumé printed on that Canon printer with me. I was so embarrassed at the text quality, I didn’t hand it to the interviewer. On my way home, I bought a beefy Brother color laser printer and eventually added the second paper tray and upgraded the RAM to 384 MB. The print quality for graphics was good (not great), and for text it was awesome. The Brother system cost me about $650.

The main issue with the Brother was that it often took 2-5 minutes to warm up before printing. So if I was on the phone and wanted to print something and write down notes, the call could be well on its way by the time I got my document out of the printer. The other issue was that the Brother was a printer only, and the Canon was getting long in the tooth–5 years of poor-quality copies and no support for TLS-protected email made it difficult to use for scanning–so it was time to upgrade.

So I bought a monster HP color laser all-in-one with the huge extra paper tray and rolling stand. It cost about $1500 all told. When I printed a color document and compared it to the Brother, I was blown away–the HP graphics are just awesome. It can print 30 pages before the Brother wakes up to start printing page 1. (No kidding!) It works with the Mac Image Capture app for both the flatbed scanner and the document feeder.

But… the HP doesn’t reliably wake from sleep over the LAN. It has issues with Chrome and PDFs from time to time. The paper tray design is the opposite of what I want–it can hold 250 8.5×11 sheets and 500 8.5×14. I want 250 8.5×14 and 500 8.5×11. Seriously HP, get it together. Its 256 MB of RAM isn’t upgradeable (unreal, I couldn’t believe that). I’ve ended up stringing a USB cable across the office temporarily, since the networking essentially doesn’t work.

During this time, due to the cost of the HP color toner, I bought a $150 Brother laser for my family to use. It’s black and white, takes up minimal space, it’s fast as heck, uses little electricity, and the text quality is better than the Canon–it’s a great little printer! I kind of want one for my office! But of course, no color, copier, or alternative paper trays.

Let’s review the issues for a device that is supposed to print:

  1. Doesn’t print straight [Lexmark #1]
  2. Poor text quality from a b&w printer [Canon]
  3. Total cost of ownership was $10/page, then required to throw away 60 lb of metal and plastic [Lexmark #2]
  4. 2-5 minutes to warm up! [Brother #1]
  5. Unreliable networking on a workgroup printer, stupid paper tray design, etc. [HP]

Is this so hard? I’ve bought 7 laser printers in the last 15 years and only 2 of them seemed to be good…and they were at the bottom of the market. It makes no sense and it’s frustrating.

Your Data Center Will Be Smaller Than Your Childhood Bedroom

Posted by mitch on May 19, 2015
business, hardware, software

I saw a tweet from Chris Mellor referencing a Gartner conclusion that mid-market IT will go to the cloud.

Today, Storage Newsletter’s headline quotes an IDC report that personal and low-end storage sales (fewer than 12 bays) have declined 6.4% y/y. Some segments of the business sank 20% y/y.

What happened in the last year? Do people create less data than they did a year ago? Isn’t data storage growing tremendously?

What is changing is where people create and keep their data. As applications move to the cloud, the data goes with them. From Salesforce to consumer photos, from corporate email to songs, all of this stuff is in someone else’s data center.

I have about 100 TB of disks in my house across six fast hardware RAIDs, but all of my critical working set lives in the cloud. Cloud pricing for large amounts of data (e.g., 1 TB) is so cheap that it’s free or bundled (Office 365, Flickr). Dropbox stands alone as a separately priced service, and it’s not that expensive–I certainly cannot buy a 1 TB drive and operate it for a year at the price point Dropbox offers.

Generally, IT vendors fail to deliver on simplicity; it’s not in their vocabulary. I’ve been in those meetings–hundreds of them, actually–where engineers want to offer every option for the customer and for some reason (lack of vision?) the product manager lets it happen. The problem with these meetings is that everyone in them usually forgets that while the product is the most important thing in the lives of the folks creating the products, the customers have other things on their minds.

So we end up with these overly complex IT products that are impossible to use. Quick, how do you set up Postgres database backups with Tivoli? I have no idea but I know it will take a few hours to figure it out (if I am lucky). The brilliance of Amazon’s cloud user interface is that (1) the details are hidden and (2) the user is presented with just the critical options. Do you want to back up this database? Sure! Great, when? Hey, you know, I don’t really care. Just keep the backups for 30 days.

[Screenshot: configuring backups in the AWS console]
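As a concrete (and purely illustrative) sketch of the same idea outside the console: for an RDS database, the retention knob is a single CLI call. The instance name mydb below is hypothetical, and the 30 days just mirrors the example above.

aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --backup-retention-period 30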

One of the most powerful things about AWS is that significant infrastructure sits under a single pane of glass. This has been the Holy Grail of IT, but it has never really been realized. OpenView, System Center, vCenter, TSM–everyone wants to do it, but few organizations pull it off, likely due to a mix of political, technical, and economic reasons.

The best part of Gmail going down is that it’s not my problem to bring it back online. Remember when you worked at a place that ran Exchange and the guy in charge of Exchange was always on edge? The only reason that guy is on edge now is that he is waiting for a call to see if he got the job at a place that has switched to Gmail.

The data center of the future for most mid-market companies is a single rack consisting of network connectivity, security devices, and WAN acceleration devices. No servers or standalone storage–with applications in the cloud, the only thing needed locally is data caching to offset the WAN overhead and maybe provide short-circuit data movement among local peers. This single rack will fit into a closet.

IT will still exist; these cloud applications will still need support, maintenance, and integration–and the network issues will be as challenging as ever.

But anyone who is building IT products for on-site installation is facing a significant headwind if they’re not enabling the cloud.


The New Mac Pro @ 8 months

Posted by mitch on August 03, 2014
hardware

I ordered my 2013 Mac Pro the day they went up for sale, even though I had an early morning flight that day. To recap, I bought a 6 core with D500, 1 TB SSD, and upgraded to 64 GB of OWC RAM. I upgraded my old 8 TB Areca RAID to 24 TB, bought an OWC Thunderbolt PCI chassis, and moved over my (old) Areca 1680x card. The OWC chassis is loud, so I also bought the 10 meter Thunderbolt cable and put the adapter box and disks in my office closet.

I had been running 3×30″ Apple displays with the cable mess that comes with the DisplayPort->DVI adapters, but recently switched out the Apple displays for my HP ZR30ws. Frankly, the HPs have a better picture, likely just due to crisper, more even lighting as a result of being 6 months old instead of 6 yrs old, but best of all, they require no adapters.

I sold my 2012 12-core Mac on Craigslist.

The highlights of the new Mac Pro are the lower energy usage, the reclaimed physical space, and the huge reduction in cable mess. It’s disappointing that going to 128 GB of RAM comes at a huge memory-speed hit in the new box, but I can live with it (hoping that something better will be available by the time I need more than 64 GB).

The new Mac has only been off for about 2 days since I bought it, due to construction in my office. It’s been solid. I’m happy with the upgrade.

(I didn’t mention performance! It’s fast. The old box was fast, too. Is this one faster? Go look at my other post. I am spending most of my time crunching numbers in C++ on Linux this year, and it’s been great–especially with the 3.5 GHz single-thread clock vs 2.4 GHz on the old 12-core–but the 64 GB ceiling has been an issue for some of the calculations I am doing. Not a show stopper yet, though.)

The only downside that has bitten me is that there’s no locking mechanism for Thunderbolt cables–so if one falls out, and your home directory is on that Thunderbolt device (mine is), it’s very unfortunate. I’ve “solved” this with zip ties for now.

[Photo: the new Mac Pro]


Scribbles on the New Mac Pro

Posted by mitch on January 26, 2014
hardware

A significant number of folks have asked about my thoughts on the new Mac Pro… so here we go. I promise not to tell you the same nonsense you have already read everywhere else (lighted ports, etc.).

Some background: I bought an 8-core 2008 Mac Pro on the day they were available for pre-order. It was my main workstation for years, until September 2012, when the speed and RAM ceiling became painful enough to upgrade to the “2012” Mac Pro, a 12 core 2.4 GHz machine. Clock for clock, that upgrade yielded compute performance roughly double the 2008 Mac Pro.

I wasn’t sure what to expect with that upgrade, nor was I sure what to expect with the new 2013 Mac Pro. Because of price, I elected to try a 6-core machine with the D500 video, 1 TB flash, and 64 GB of OWC RAM.

I recently ran some performance tests to see how things are going with the types of computing I do. One test is a unit test of some code I am writing. The code talks to several VMs on a third Dell VMware ESXi box and spends most of its time in select() loops. There was almost no performance difference between the old and new Macs–about 3%, which isn’t surprising.

However, I have some code that runs on local disk and does heavier CPU work. One of the pieces of code shoves a lot of data through a commercial database package inside of a VM. The VM is configured with 8 cores and 16 GB of RAM on both machines. We’ll call this Test A.

Another test does extensive CPU calculations on a multi-gigabyte dataset. The dataset is read once, computations are done and correlated. This runs on native hardware and not inside of a VM. We’ll call this Test B.

          old Mac Pro [1]    new Mac Pro [2]    Retina 13″ MacBook Pro [3]
Test A:   65.6 seconds       38.1 seconds       N/A (not enough RAM)
Test B:   82.3 seconds       52.9 seconds       67.8 seconds

[1] 2012 Mac Pro, 12-core 2.4 GHz, 64 GB of RAM, OWC PCIe flash
[2] 2013 Mac Pro, 6-core 3.5 GHz, 64 GB of RAM, Apple flash
[3] 2013 Retina MacBook Pro 13″, 2-core 3 GHz i7, 8 GB of RAM, Apple flash

As you can see, the new Mac does the same work in about 40% less time. The CPU work here is in the range of 1-3 cores; it doesn’t scale up to use all the available cores. To keep the tests as fair as possible, the old Mac Pro boots from a 4-SSD RAID 0+1 and the test data lived on an OWC PCIe flash card. None of these tests uses the GPUs of the old or new Macs in any fashion, nor is the code particularly optimized one way or the other. I ran the tests 3 times per machine and flushed the buffer caches before each run.

Does the Mac feel faster day to day? Maybe. In applications like Aperture, where I have 30,000 photos, scrolling and manipulation “seems” a heck of a lot better. (For reference, the old Mac has the Sapphire 3 GB 7950 Mac card. I don’t have an original Radeon 5770 to test with, having sold it.)

The cable mess behind the new Mac is the same as the old Mac. In fact, it’s really Apple’s active DVI adapters for my old Apple monitors that contribute to most of the cable mess. Once the Apple monitors start to die, that mess will go away, but until then I see little reason to upgrade.

The physical space of the new Mac Pro is a significant advantage. The old Pro uses 4 sq ft of floor space with its external disk array. The new Pro by itself actually has a footprint smaller than a Mac Mini (see photo at end of this post)!

The fan is quiet, even under heavy CPU load. The top surface seems to range from 110 F to 130 F; the old Mac had a surface exhaust range of 95 to 99 F when I measured it. So the new one is hotter to the touch, and indeed the sides of the chassis range from 91 F at the very bottom to about 96 F on average. For reference, the top of my closed Retina MacBook as I write this is about 90 F, and the metal surface of the 30″ Cinema Display runs around 88 F to 90 F in my measurements (all measured with an IR non-contact thermometer).

Because there is no “front” of the new Mac Pro, you can turn it at any angle that reduces cable mess without feeling like you’ve got it out of alignment with, say, the edge of a desk. This turns out to be useful if you’re a bit particular about such things.

On storage expansion, there’s been a lot of concern about not being able to put drives inside the new Pro. Frankly, I ran my 2008 machine without any internal disks for years, instead using an Areca 1680x SAS RAID, so I’m glad to see this change. There are lots of consumer-level RAIDs out there under $1000, but I’ve given up on using them–performance is poor and integrity is often questionable.

I am backing up to a pair of 18 TB Thunderbolt Pegasus systems connected to a Mini in my basement, and bought an Areca ARC-8050 Thunderbolt RAID 8-Bay enclosure and put in 24 TB of disks for the new Pro. Sadly, while it’s fine in a closet or basement, it turns out to be too loud to sit on a desk, so I bit the bullet and ordered a 10 meter Thunderbolt cable. I haven’t received the cable yet, so I haven’t moved my data off my Areca SAS RAID in my old Pro yet. But once that is done, I expect to stop using the old 8 TB SAS RAID and just use the new RAID. These are expensive storage options, but the cheap stuff is even more expensive when it fails.

So, should you buy the new Mac Pro?

I don’t know.

For me, buying this Pro was never about upgrading from my old Pro, but rather upgrading my second workstation–a maxed-out 2012 Mac Mini that struggled to drive 30″ displays and crashed regularly while doing so (it’s stable with smaller displays, but across the four or five Minis I’ve had over the years, none could reliably drive a 30″–Apple should really not pretend that they can). In the tests above, I’ve ignored the 1.1 GHz clock difference, but clearly that contributes to the performance for these kinds of tests.

What about price? This new Mac Pro ran me about $6100 with tax, shipping, and the OWC RAM upgrade. The old Mac Pro cost about $6300 for the system, PCIe flash, SSDs, brackets, video card upgrade, and OWC RAM upgrade. (The disk systems are essential to either Mac as a main workstation, but also about the same price as each other.) I don’t view the new Mac Pro as materially different in price. Pretty much every main workstation I’ve had in the last 12 yrs has run into the low five-figures. In the grand scheme of things, it’s still cheaper than, say, premium kitchen appliances, though perhaps it doesn’t last as long! On the other hand, I’m not good enough at cooking that my kitchen appliances are tools that enable income. If I wasn’t using my Macs to make money, I doubt I’d be buying such costly machines.

While I am not a video editor, and just do some 3D modeling for fun as part of furniture design or remodeling projects, I feel this machine is warranted by my heavy CPU work and my desire for a lot of monitors. I’m not in the target GPU-compute market (yet?), but I do want a big workspace. There’s no other Mac that offers this (I get headaches from the glossy displays Apple offers, though the smaller laptop screens are OK).

So now on my desk, I have a pair of Pros, each driving a set of 3×30″ displays, which matches the work I am doing right now. I haven’t had a video lockup for 12 days and counting, which has proven a huge time saver and frustration reducer, so I’m happy that I jumped on this sooner rather than later.


30 Years of Mac

Posted by mitch on January 24, 2014
hardware

My parents bought a Mac 128K in 1984 (pictured below). The screen stopped working in 1993, and it hadn’t been reliable at that point for a number of years–my dad upgraded to a pair of Mac Pluses when they came out and then later he upgraded again to the Mac II.

There were lots of frustrating things about the Mac 128. Almost no software worked on it, since it was outdated almost immediately by the Mac 512. MacWrite didn’t have a spell check or much of anything else. Only one program could run at a time–no MultiFinder. A 1 MB Mac Plus was a significantly better computer, especially if you had an external hard disk that conveniently fit under the Mac–thus increasing speed, storage capacity, and the height of the monitor. Even the headphone port on the 128 was mono, if I recall correctly.

Yet there was something deeply magical about computing in that era. I spent hours goofing off in MacDraw and MS Basic. At one point, my dad had the system “maxed out” with an Apple 300 baud modem, an external floppy drive, and the ImageWriter I printer. At some point, the modem went away and we were modemless for a number of years, but one day he brought home an extra 1200 baud modem he had at his office and I spent hours sorting out the Hayes AT command set to get it to work–a lot of registers had to be set on that modem; it wasn’t just a simple matter of ATDT555-1212.

That reminds me, I need to call Comcast. It seems that they cut their pricing on 100 Mbit connections.


The New Mac Pro

Posted by mitch on June 11, 2013
hardware

I am very excited about the new Mac Pro.

We don’t know the price yet. We don’t have full specifications. It’s not clear this form factor will ever support dual CPU packages or 8 DIMM slots (it seems it might only have 4 DIMM sockets). The total price for four 32 GB DIMMs currently runs about $10,000 from B&H. Happily, four 16 GB DIMMs cost a lot less—around $1,200. 64 GB of RAM is sufficient for me for now, but based on how my need for memory has grown in the past, I’m hoping to see a 128 GB option for around $1,200 within two years of owning the machine.

Apple claims flash I/O throughput of around 1250 MB/s, which is better than the four-disk SATA SSD RAID 1+0 in my Mac Pro and faster than my first-generation OWC Accelsior PCIe card.

Apple mentions up to 2×6 GB of dedicated video RAM, which significantly beats the 1-3 GB cards we’ve had on the market until now. I also am excited at the prospect of 30″ displays at 3840 x 2160. My three Apple 30″ displays are starting to show their age in terms of the backlight wear—it takes longer and longer for them to come to full brightness. I bought a Dell 30″ for my other desk, and I had to buy a calibrator to get acceptable color out of it. So I am hopeful Apple will ship a matte 30″ 4K display… (this seems rather unlikely).

Only four USB ports is a shame, but not the end of the world. Hopefully the USB 3 hub issues with Macs will be resolved soon.

And then there are the PCI slots. My Mac Pro currently has a 7950 video card in one slot, an Areca 1680x, an eSATA card that I quit using, and the PCIe Accelsior. Frankly, the new Mac Pro meets my PCI expansion needs—external chassis are cheap if I ever really need slots (just $980 for a 3-slot Magma; and Apple mentions expansion chassis are supported). What makes this possible is that Thunderbolt RAIDs are just as fast as Areca SAS configurations and generally require a lot less monkeying around. I have two Promise 18 TB Thunderbolt RAIDs connected to a Mac Mini in my basement for Time Machine backups and they have been fantastic.

So I imagine my 2013 Mac Pro will look like the following configuration:

  • Mac Pro with 8 or 12 cores, depending on price and clock options
  • 64 GB of RAM
  • 512 GB — 1 TB flash storage for boot
  • Thunderbolt ports 1-3 — with DisplayPort adapters for existing displays
  • Thunderbolt port 4 — 12-24 TB Thunderbolt RAID for home directory. I’d love to see a 12×2.5″ SSD RAID 1+0 when 1 TB SSDs get under the $400 price point.
  • 3 USB ports connected to hubs
  • 1 USB port connected to external hard disk for cloning boot drive to
  • Hopefully the audio out line has an optical connection like the AirPort Express and other optical products.

I think this will fit my needs pretty well, as long as a 128 GB RAM upgrade is cheap enough down the line. 256 GB would have been a lot nicer.

And best of all, this configuration will free up at least 4 sq ft of floor space where my Mac Pro and SAS chassis sit. If the computer is quiet enough to sit on the desk, then the Mac Pro and the Thunderbolt RAID together take up only about 1.5 sq ft, which would be a tremendous improvement in my office, where space is at a premium.

Update: I take issue with the complainers who say that the new Mac Pro will lead to a big cable mess. For me, I expect it will be about the same, but take up less floor space.


Ethernet hwaddr and EEPROM storage with Arduino

Posted by mitch on October 31, 2012
hardware, projects, software

There are lots of examples of how to use the Ethernet Wiznet chips with Arduino, whether as Ethernet shields or as Ethernet Arduinos on a single board. Unfortunately, most of these examples hard-code the hardware (MAC) address, which can make things painful if you’re building more than one device and running them on the same network.

The code snippet below is a more convenient approach. You can set up a prefix (DEADBEEF in the example below) for the hardware address, and the last two bytes are set randomly on first boot. The hardware address is stored in EEPROM (7 bytes are needed: 1 for a flag indicating that the next 6 bytes are properly populated).

The bytes->String conversion below is a bit ugly, but I didn’t want the overhead of sprintf for this. It is probably not worth the trade-off. (0x30 is ‘0’ and 0x39 is ‘9’. Adding 0x07 skips over some ASCII characters to get to ‘A’.)
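(For comparison, the sprintf route would look roughly like the fragment below. This is only a sketch–it reuses the globals from the listing that follows, uses the bounds-checked snprintf variant, and costs a bit of extra program space on AVR.)

// Hypothetical alternative to the nibble-to-ASCII loop in the listing below.
char hw_string[13];
snprintf(hw_string, sizeof(hw_string), "%02X%02X%02X%02X%02X%02X",
         NETWORK_HW_ADDRESS[0], NETWORK_HW_ADDRESS[1], NETWORK_HW_ADDRESS[2],
         NETWORK_HW_ADDRESS[3], NETWORK_HW_ADDRESS[4], NETWORK_HW_ADDRESS[5]);
NETWORK_HW_ADDRESS_STRING = String(hw_string);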

Some serious caveats: There are only two bytes of randomness here. You might want more. Ideally you would have a manufacturing process, but if you’re just building six devices, who cares? Clearly you would never use this approach in a production environment, but it’s easier than changing the firmware for every device in a hobby setting. You could also use a separate program to write the EEPROM hardware address and keep this “manufacturing junk” out of your main firmware. These issues aside, my main requirement is convenience: I want to be able to burn a single image onto a new board and be up and running immediately without having to remember other steps. Convenience influences repeatability.

#include <SPI.h>       // the Ethernet library expects SPI.h on most Arduino cores
#include <Ethernet.h>
#include <EEPROM.h>

// This is a template address; the last two bytes will be randomly
// generated on the first boot and filled in.  On later boots, the
// bytes are pulled from EEPROM.
byte NETWORK_HW_ADDRESS[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00};
String NETWORK_HW_ADDRESS_STRING = "ERROR_NOT_FILLED_IN";

// These are commented out so that this code will not compile
// without the reader modifying these lines.  If you are using
// EEPROM code in your program already, you need to put the
// network address somewhere that doesn't collide with existing use.
//#define EEPROM_INIT_FLAG_ADDR 0
//#define EEPROM_HWADDR_START_ADDR 1

// Call this from your setup routine (see below)
void
initEthernetHardwareAddress() {
    int eeprom_flag = EEPROM.read(EEPROM_INIT_FLAG_ADDR);
    int i;
    Serial.println("EEPROM flag is " + String(eeprom_flag));

    if (eeprom_flag != 0xCC) {
        // Seed the PRNG from a floating analog pin; without a seed, every
        // board flashed with this image would pick the same "random" bytes.
        randomSeed(analogRead(0));
        NETWORK_HW_ADDRESS[4] = random(256);   // 0-255 inclusive
        NETWORK_HW_ADDRESS[5] = random(256);
        
        // write it out.
        Serial.println("Writing generated hwaddr to EEPROM...");
        for (i = 0; i < 6; i++) {
            EEPROM.write(EEPROM_HWADDR_START_ADDR + i + 1,
                         NETWORK_HW_ADDRESS[i]);
        }

        EEPROM.write(EEPROM_INIT_FLAG_ADDR, 0xCC);
    } else {
        Serial.print("Reading network hwaddr from EEPROM...");
        for (i = 0; i < 6; i++) {
            NETWORK_HW_ADDRESS[i] =
                EEPROM.read(EEPROM_HWADDR_START_ADDR + i + 1);
        }        
    }
    
    char hw_string[13];
    hw_string[12] = '\0';
    for (i = 0; i < 6; i++) {
        int j = i * 2;
        
        int the_byte    = NETWORK_HW_ADDRESS[i];
        int first_part  = (the_byte & 0xf0) >> 4;
        int second_part = (the_byte & 0x0f);
        
        first_part  += 0x30;
        second_part += 0x30;
        
        if (first_part > 0x39) {
            first_part += 0x07;
        }
        
        if (second_part > 0x39) {
            second_part += 0x07;
        }
        
        hw_string[j] = first_part;
        hw_string[j + 1] = second_part;
        
    }

    NETWORK_HW_ADDRESS_STRING = String(hw_string);

    Serial.println("NETWORK_ADDR = " + NETWORK_HW_ADDRESS_STRING);
}

void
setup() {
    Serial.begin(9600);   // plus the rest of your usual setup

    // set up the Ethernet hwaddr before you start using networking
    initEthernetHardwareAddress();

    // Ethernet.begin() with just a MAC address uses DHCP; it returns 0 on failure
    int dhcp_worked = Ethernet.begin(NETWORK_HW_ADDRESS);
    if (dhcp_worked == 0) {
        Serial.println("DHCP failed; check the cable and network.");
    }

    // ...
}

iPad (2012) vs iPad 2 and iPad 1

Posted by mitch on March 16, 2012
hardware

Summary of my thoughts so far:

  • The screen is amazing.
  • The 2012 iPad is not noticeably thicker than the iPad 2 and is still thinner than the original iPad.
  • The 2012 iPad feels heavier. It’s not technically significantly heavier, but that weight in your hand for an extended period feels heavier. It also feels like the weight is distributed differently, though I am not sure that this is true.

Some pictures:

iPad 2 (left) and new iPad on right

iPad 2, iPad (2012), iPad 1. All screens on 50% brightness.

iPad 2012 (left) and iPad 1 (right) thickness

More pictures here.


Apple IIc Compact Serial Console

Posted by mitch on March 05, 2012
hardware

About four and a half years ago, Paul Weinstein wrote up a blog post on setting up a serial console with an Apple IIc. I finally got around to duplicating his efforts this weekend after I got sniped on an eBay auction for a Lantronix serial concentrator.

I’m not going to duplicate Paul’s excellent write-up here, but I will fill in a few areas that I stumbled over.

In order to do this, you’ll need:

  • An Apple II with a serial port. The IIc has two serial ports built-in.
  • At least 3 floppy disks.
  • A serial cable to connect your Apple II to some other computer. I bought my cable and a pack of blank disks from the excellent fellow over at RetroFloppy. You could also make a cable, but in the case of the IIc, I didn’t have any DIN-5 connectors, so it was easier just to buy someone else’s properly-made cable.
  • The Apple Disk Transfer host program and disk image to move to the Apple II. You’ll dump a copy of the Apple II side software onto a floppy disk.
  • The Modem.MGR program disk images, which are available as a .zip file download here. You’ll need two floppy disks to write these onto; one is a configuration tool for the other disk, which contains the application.

I had a lot of issues with higher baud rates; I ended up settling on 9600 baud for most things.

The other gotcha I ran into was forgetting to format floppies with ADT prior to moving the disk image over. It sounds semi-obvious, but two things got in my way. One is that the Apple II reports “I/O error” when reading an unformatted floppy; the scope of what the Apple II considers an I/O error and what I consider an I/O error are different. Also, when this happens, the Apple II disk drive makes a horrible noise that “sounds like” a serious I/O error. So, when ADT starts up on the Apple II, be sure to configure the settings you want (baud rate) and format yourself some floppies. Both of these are in the “splash screen” when the program launches.

I pushed all the Apple II images over with the ADT Pro program on my Mac and a Keyspan serial adapter. Paul’s post mentions using A2V2 that comes with Virtual ][. I could never get A2V2 to work but I was successful with ADT.

Moving a floppy disk image (143K) takes a few minutes at 9600 baud.

I am using the Apple II with my new network management box. Today an old Athlon with 1 GB of RAM does this job, and I want to get rid of both the physical and electrical footprint of that machine. One of the companies I started built network security hardware, and we built prototypes using boards from PCEngines, among others. The new system I am using for my network manager is a newer PCEngines board called the Alix2d13. It has 256 MB of RAM, a header for a second serial port (which I plan to add for X10 control), and I’ve added a 16 GB CF card to mine. I’m running Ubuntu 10 on it, which is a happy change from the bad old days of setting up an environment (we were using OpenWRT buildroot… it’s not bad, but it’s a pain with uclibc, because you need a cross-compile environment; having the space and RAM for the full glibc, Python, etc. is wonderful). But this is a topic for another post!

There are more pictures here.

By the way, though I knew how to get around them, there are at least two other small things that might frustrate folks trying to do this themselves. One was that I had to manually start the getty on the Linux machine (something like /sbin/getty -8 9600 -L ttyS0 gets you there). However, it seems that the getty is properly restarting from init scripts on reboot, so I’m not sure why the getty had gone away the first time. The other is that a TERM=linux in your environment with Modem.MGR will mess up a lot of paging apps, such as top or lynx. You can fix this with export TERM=vt100 or similar.
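If the getty vanishes again, one way to keep it around permanently on an upstart-based Ubuntu like this one is a small job file. The sketch below is untested; the file name, runlevels, and terminal type are assumptions to adjust for your own setup.

# /etc/init/ttyS0.conf
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
respawn
exec /sbin/getty -8 9600 -L ttyS0 vt100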
