My 3 Year Bookcase Project

Posted by mitch on May 20, 2015
home, productivity

Back in 2011 over the Thanksgiving break, I was playing with learning how to do things in SketchUp and drew a 3d model of a bookcase idea I had for my office. My office is in the “1/2 story” (the third floor) of my house, which means low ceilings. In my 2008 remodel, I gutted it, rewired it, vaulted the ceiling, and so on.

About 16 months later in early 2013, I drew this picture and sent it to my architect, Carl Oldenburg:

drawing

I have a lot of heavy books and wanted short spans to avoid bowing. Carl whipped up this awesome SketchUp rendering:

Haile Bookcase 2013-03-09a

Who could say no to building that? Inspired by Carl’s skills, I spent some time practicing and playing with ideas. I really wanted to know what this was going to be like:

office-render-1.png

office-render-2.png

After various distractions, we had the design finalized by December 2013:

Screenshot 2015-05-20 23.44.05

In early 2014, I got in touch with Aaron Honore, who is the most serious, hardcore, and awesome cabinetry carpenter I’ve known (and I’ve known more than one). Aaron was booked for 6 months, but I was willing to wait.

Construction finally happened in September 2014. I worked out of my workshop during this time:

IMG_9245

In 2008, before moving into the house, many rooms were gutted, the house was rewired, and so on. This is what the front wall of the office looked like about 4 months after moving in:

2888550029_8c0ddb52a4_b

The picture below is what it looked like by the time Aaron was done with it. I think the install took about 2 weeks; I don’t really remember. Certainly Aaron took his time and made it perfect:

IMG_6217

For such a small project, it was still quite an outlay of time and a bit of stress. But having had the bookcase for 8 months now, I have no regrets. I certainly took my time and thought it through in great detail. There’s a built-in stereo section that connects an amp to the old speaker-wire drops I put in during the 2008 remodel, and the LED lights under the eaves and the wall lights in the ‘A’ are wonderful.

My house is small, and I highly recommend built-ins for small living. You can use every bit of space, and there’s no gap between the storage and the wall, which in some cases saved me 2-3″. By making built-ins narrower than usual in some cases (my living room has a 10″-deep bookcase that is 14 ft long), I’ve saved an effective 5″ of space in a room. If a room is 12 ft across, that’s significant.

What’s the point of this post? Beats me. “Take your time and do it right,” perhaps.

IMG_0538

IMG_0940

Update: I realized after posting this that I didn’t mention some of the non-obvious features of the bookcase. Sure, you can tell from the photos there are lights and doors. For anyone thinking about doing this, here are a few things I did that I really like:

1. The deep shelves under the eaves have glass shelf insets to let light reach the back of the lower shelf. I’ve doubled up books on the bottom shelf, and this lets me see what’s back there when my eyes are aligned with the roof angle. The light spilling out above the books below makes the space feel more open than it would if it were dark:

IMG_1026

IMG_1029

2. The speaker posts, Ethernet ports, and power are in the back of the lower shelves, where I thought I might want audio equipment. I also ran a 50 ft TOSlink cable through the bookcase from one end to the other, just in case I ever wanted it. One thing I did not consider was how difficult the wiring would be, because the shelf is fixed and only 10″ high. The removable glass panels turned out to be quite handy for that.

IMG_1032

IMG_1033

3. The light switch for the eave LEDs and the ‘A’ lights is hidden behind one of the shaker panels. It’s a double switch in a 1-gang box.

IMG_1036


Your Data Center Will Be Smaller Than Your Childhood Bedroom

Posted by mitch on May 19, 2015
business, hardware, software

I saw a tweet from Chris Mellor referencing a Gartner conclusion that mid-market IT will go to the cloud.

Today, Storage Newsletter’s headline quotes an IDC report that personal and low-end storage sales (fewer than 12 bays) declined 6.4% y/y. Some dimensions of the business sank 20% y/y.

What happened in the last year? Do people create less data than they did a year ago? Isn’t data storage growing tremendously?

What is changing is where people create and keep their data. As applications move to the cloud, the data goes with them. From Salesforce to consumer photos, from corporate email to songs, all of this stuff is in someone else’s data center.

I have about 100 TB of disks in my house across six fast hardware RAIDs, but all of my critical working set lives in the cloud. Cloud pricing for large amounts of data (e.g., 1 TB) is so cheap that it’s free or bundled (Office 365, Flickr). Dropbox stands out as a separately priced service, and it’s not that expensive; I certainly cannot buy a 1 TB drive and operate it for a year at the price point Dropbox offers.

Generally, IT vendors fail to deliver on simplicity; it’s not in their vocabulary. I’ve been in those meetings–hundreds of them, actually–where engineers want to offer every option for the customer and for some reason (lack of vision?) the product manager lets it happen. The problem with these meetings is that everyone in them usually forgets that while the product is the most important thing in the lives of the folks creating the products, the customers have other things on their minds.

So we end up with these overly complex IT products that are impossible to use. Quick, how do you set up Postgres database backups with Tivoli? I have no idea but I know it will take a few hours to figure it out (if I am lucky). The brilliance of Amazon’s cloud user interface is that (1) the details are hidden and (2) the user is presented with just the critical options. Do you want to back up this database? Sure! Great, when? Hey, you know, I don’t really care. Just keep the backups for 30 days.

aws-screenshot

One of the most powerful things about AWS is that significant infrastructure is under a single pane of glass. This has long been the Holy Grail of IT, but it has never been realized. OpenView, System Center, vCenter, TSM–everyone wants to do it, but few organizations pull it off, likely due to a mix of political, technical, and economic reasons.

The best part of Gmail going down is that it’s not my problem to bring it back online. Remember when you worked at a place that ran Exchange and the guy in charge of Exchange was always on edge? The only reason that guy is on edge now is that he is waiting for a call to see if he got the job at a place that has switched to Gmail.

The data center of the future for most mid-market companies is a single rack consisting of network connectivity, security devices, and WAN acceleration devices. No servers or standalone storage–with applications in the cloud, the only thing needed locally is data caching to offset WAN overhead and maybe provide short-circuit data movement among local peers. This single rack will fit into a closet.

IT will still exist; these cloud applications will still need support, maintenance, and integration–and the network issues will be as challenging as ever.

But anyone building IT products for on-site installation faces a significant headwind if they’re not enabling the cloud.


Linux for the Young Computer Scientist

Posted by mitch on April 07, 2015
career, education, software

So you’re about to graduate from college, and while you’re looking for a job, someone expresses surprise when you confess that you’re not well-versed in Linux. Uh oh.

Everyone who expects to work in computing should know some basics about Linux. So much of the world runs Linux these days–phones, thermostats, TVs, cars.

Here’s a list of tasks that the young computer scientist should be able to do with Linux. The goal isn’t for you to be able to get a job as a “sysadmin” but to have a general familiarity with enough different things that you can solve real world problems with a Linux system. Of course, much of this applies to Mac OS X, too.

  1. Install CentOS or Ubuntu into a virtual machine on your Windows or Mac desktop/laptop. Open a Terminal window.
  2. You’ll probably get most of your help through Google searching, but on the command line, you can get help with specific commands by using the man command. E.g., man ls
  3. Basic file navigation: ls, cd, pwd, pushd, popd, dirs, df, du, mv. Be careful with rm.
  4. Basic editing with vim (open a file, save it, close it without saving, edit it, copy/paste with yank, jump to a specific line number, delete a word, delete a line, replace a letter.) (You can use nano while you’re coming up to speed on vim.)
  5. Use grep, less, cat, tail, head, diff commands. Use with pipes. Use of tail -f, less +F, tail -10, head -5 (or other numbers) is handy.
  6. tar and gzip to create and expand archives of files.
  7. Use sed and awk — replace text in a file, print a column of a file.
  8. Command-line git commands to checkout, edit files, commit, and push back to a remote repository (e.g., Github).
  9. Basic process navigation: ps, top, kill, fg, bg, jobs, pstree, Ctrl-Z, Ctrl-C.
  10. Unix permissions: chmod, chgrp, useradd, sudo, su; what 777, a+rw, and u+r mean; how to read the left column of ls -l / output.
  11. Simple bash scripts: Write a loop to grep a file for certain output, set command aliases
  12. Compile a simple C program with gcc. Use gdb to set breakpoints, view variables in a C program being debugged (where, bt, frame, p).
  13. Use tcpdump to watch HTTP traffic to a certain host.
  14. Understand /etc/rc.d and /etc/init.d scripts
  15. A basic understanding of /etc/rc.sysinit
  16. Attach a new disk and format it with fdisk or parted and mkfs.ext4. Run fsck. mount it. Check it with df.
  17. Know how to disable selinux and iptables for debugging. (service, chkconfig)
  18. How to use the route, ifconfig, arp, ping, traceroute, dig, nslookup commands.
  19. Write an iptables rule to forward a low-numbered port (e.g., 80) to a high-numbered port (e.g., 5000). Why would someone want to do this?
  20. A cursory understanding of the filesystem layout — what’s in /etc, /bin, /usr, /var, etc.
  21. A cursory understanding of what’s in /proc.
  22. Configure and use SSH keys for automatic login to another host.
  23. Forward a GUI window over SSH with X11
  24. Reboot and halt the machine safely (shutdown -h now, reboot, halt -p, init, etc commands)
  25. yum and apt-* commands (CentOS and Ubuntu, respectively)
  26. Modify boot options in grub to boot single user, to boot to a bash shell
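Many of the items above can be practiced together in a few lines of shell. Here's a minimal sketch combining grep, sed, awk, pipes, and a simple loop (items 5, 7, and 11); the /tmp path and log contents are just placeholders:

```shell
#!/bin/sh
# Make a small practice file (three lines of fake log output)
printf 'INFO start\nERROR disk full\nINFO done\n' > /tmp/practice.log

# Item 5: grep through a pipe -- count the ERROR lines
grep ERROR /tmp/practice.log | wc -l

# Item 7 (sed): rewrite text in a stream
sed 's/ERROR/WARNING/' /tmp/practice.log

# Item 7 (awk): print only the first column
awk '{ print $1 }' /tmp/practice.log

# Item 11: a simple loop over the unique first columns
for word in $(awk '{ print $1 }' /tmp/practice.log | sort -u); do
    echo "seen: $word"
done
```

None of this modifies the original file; sed and awk write to stdout unless you redirect them, which makes them safe to experiment with.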

For extra credit:

  1. The find command is a complicated beast, but simple to get started with.
  2. Copy files over SSH with scp.
  3. The dd command is useful for dealing with a variety of tasks, such as grabbing an image of a disk, getting random data from /dev/urandom, or wiping out a disk, and so on. Also be aware of the special files /dev/zero and /dev/null.
  4. Figure out how to recover a forgotten root password.
  5. Disable X11 and be able to do these tasks without the GUI.
  6. Do the same tasks above on a FreeBSD machine.
  7. Without the GUI, configure the machine to use a static IP address instead of DHCP.
  8. Use screen to create multiple sessions. Logout and re-attach to an existing screen session.
  9. Write a simple Makefile for a group of C or C++ files.
  10. What does chmod +s do? Other special bits.
  11. netstat, ncat, ntop.
  12. ldd, strings, nm, addr2line, objdump
  13. Grep with regular expressions
  14. What’s in /etc/fstab?
  15. history, !<number>, !!, !$, Ctrl-R
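A couple of the extra-credit items can also be tried safely in /tmp. A sketch of dd and find (the file name and sizes here are arbitrary):

```shell
#!/bin/sh
# Extra-credit item 3: use dd to pull a 1 MiB file of zeroes
# out of the special file /dev/zero
dd if=/dev/zero of=/tmp/zeroes.img bs=1024 count=1024 2>/dev/null

# Extra-credit item 1: find it by name and size (more than 1000 KiB)
find /tmp -maxdepth 1 -name 'zeroes.img' -size +1000k

# Check the result with du from the basic-navigation list
du -k /tmp/zeroes.img
```

Swap of= for a raw device and the same dd invocation wipes a disk, which is why it's worth practicing on a file in /tmp first.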

Books to peruse:

  1. Unix Power Tools
  2. sed & awk
  3. bash Cookbook
  4. Learning Python. Every computing professional should know a simple scripting language that ties into the OS, for scripts more complex than is rational in bash; Python is an excellent place to start.
  5. Advanced Programming in the UNIX Environment, 3rd Edition (be sure to get the latest edition)
  6. If you’re interested in networking, be sure to read TCP/IP Illustrated, Volume 1: The Protocols (2nd Edition)
  7. You probably took an OS class. While Tanenbaum and Silberschatz write great books, if you want to know Linux internals better, Rubini’s Linux Device Drivers, 3rd Edition is an excellent read. A 4th edition is due later this year.


The New Mac Pro @ 8 months

Posted by mitch on August 03, 2014
hardware

I ordered my 2013 Mac Pro the day they went up for sale, even though I had an early morning flight that day. To recap, I bought a 6 core with D500, 1 TB SSD, and upgraded to 64 GB of OWC RAM. I upgraded my old 8 TB Areca RAID to 24 TB, bought an OWC Thunderbolt PCI chassis, and moved over my (old) Areca 1680x card. The OWC chassis is loud, so I also bought the 10 meter Thunderbolt cable and put the adapter box and disks in my office closet.

I had been running 3×30″ Apple displays with the cable mess that comes with the DisplayPort->DVI adapters, but recently switched out the Apple displays for my HP ZR30ws. Frankly, the HPs have a better picture, likely just due to crisper, more even lighting as a result of being 6 months old instead of 6 yrs old, but best of all, they require no adapters.

I sold my 2012 12-core Mac on Craigslist.

The highlights of the new Mac Pro are the lower energy usage, the reclaimed physical space, and the huge reduction in cable mess. It’s disappointing that going to 128 GB of RAM comes at a huge memory-speed hit in the new box, but I can live with it (hoping that something better will be available by the time I need more than 64 GB).

The new Mac has only been off for about 2 days since I bought it, due to construction in my office. It’s been solid. I’m happy with the upgrade.

(I didn’t mention performance! It’s fast. The old box was fast, too. Is this one faster? Go look at my other post. I am spending most of my time crunching numbers in C++ on Linux this year, and it’s been great–especially with the 3.5 GHz single thread vs 2.4 GHz on the old 12-core–but 64 GB has been a constraint for some of the calculations I am doing. Not a showstopper yet, though.)

The only downside that has bitten me is that there’s no locking mechanism for Thunderbolt cables–so if one falls out, and your home directory is on that Thunderbolt device (mine is), it’s very unfortunate. I’ve “solved” this with zip ties for now.

Mac Pro


Email Introductions

Posted by mitch on August 02, 2014
business

From time to time, someone asks me to facilitate an introduction. Sometimes it’s to someone specific (“Mitch, do you know Bob?”) and sometimes it’s vague (“I’d like to meet people with problem X” or “who do activity X”). If I am able, I’m happy to help, as I’ve been fortunate to (and continue to) benefit from others helping me with this kind of thing.

A few thoughts on this:

  1. Send the person you are asking for an introduction an email, not a LinkedIn message. Depending on the person, you might call them too.
  2. Give the person a paragraph they can copy and paste or edit. Why do you want the introduction? If it’s to someone specific, why specifically them? Don’t make your introducer create copy from scratch.
  3. When/if the introduction happens, move the introducer to bcc right away. If the other party moves the introducer to bcc, don’t re-add the person!
  4. Say thanks, especially if someone introduces you to multiple people in one go. Sometimes I introduce folks to half a dozen customers or partners and never hear any follow-up. Was it useful? Were the introductions crap and I wasted everyone’s time? I have no idea.
  5. If you get connected with someone and they stop interacting, it might be ok to query the introducer, but don’t be surprised if they pass on re-engaging with the person of interest.

Related: If I introduce you to someone, I will often ping that person and ask if they are interested in an introduction before I send the first email with both of you. The only time I may not is when I am pinging a vendor with a potential new customer. Related: It drives me crazy when someone introduces me to someone without asking, especially if it’s not clear why in the email. I rarely reply to these emails.

Also related: Assume nothing about geography. I always cringe when one of the replies says, “Thanks for the intro — Hey Bob, should we get lunch?” when the two folks are thousands of miles apart. Not everyone lives in (y)our city and if I am creating the copy from scratch, I may not include geography information.

There’s probably more to say about this.

Conway’s Law and Your Source Tree

Posted by mitch on February 05, 2014
software

In the last post, I mentioned Conway’s Law:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.

Dr. Conway was referring to people in his article–but what if we substitute “organization” with your product’s source tree and “the communication structures” to how functions and data structures interact? Let’s talk more about Conway’s Law in the context of source tree layout.

Many products of moderate complexity involve multiple moving parts. Maybe a company has a cloud service (back end) and a Web UI (front end). Or a back end and a mobile front end. Or a daemon, some instrumentation scripts, and a CLI. Or a firmware and a cloud service.

I’ve had my hands in a number of companies at various depths of “hands in.” Those who lay out a source tree that fully acknowledges the complexity of the product as early as possible tend to be the ones who win. Often, a company is started to build a core product–such as an ability to move data–and the user interface, the start-up scripts, the “stuff” that makes the algorithm no longer a student project but a product worth many millions of dollars, is an afterthought. That’s fine until someone creates a source tree that looks like this:

trunk/
	cool_product/
		main.c
		error.c
		util.c
		network.c
	stuff/
		boot.c
		cli.c
		gui.c

What’s wrong here? Presumably, some of the code in util.c could be used in other places. Maybe some of the functions in error.c would be handy to abstract out as well. An arrangement like this, in which cool_product is a large monolithic app, likely means it’s going to be difficult to test any of the parts inside it; modules and layering are probably not respected in a large monolithic app. (Note that I am not saying it’s impossible to get this right, but I am saying it’s unlikely that tired programmers will keep the philosophy in mind, day in and day out.)

A slightly different organization that introduces a library might look as follows:

trunk/
	lib/
		util.c
		error.c
		network.c
		tests/
			Unit tests for the lib/ stuff
	prod/
		cool_product/
			main.c
		gui/
		cli/
	tools/
		Build scripts or related stuff required,
		code generation, etc.
	platform/
		boot.c

As a side effect, we can also improve testing of the core code, thus improving reliability and regression detection. Ideally, the cool_product is a small amount of code outside of libraries that can be unit tested independently.

More than once I’ve heard the excuse, “We don’t have time to do this right with the current schedule.”

“I don’t have time for this” means “This isn’t important to me.” When you say, “I don’t have time to clean up the garage,” we all know what you really mean.

I was incredibly frustrated working with a group who “didn’t have time” to do anything right. Years later, that company continues to ship buggy products that could have been significantly less buggy. A few weeks of investment at the beginning could have avoided millions of dollars of expense and numerous poor reviews from customers due to the shoddy product quality. And it all comes back to how hard (or easy) it is to use the existing code, i.e., the communication structure of the code.

If you don’t have time to get it right now, when will you have time to go back and do it right later?

Getting it right takes some time. But getting it wrong always takes longer.

Teams with poor source tree layout often end up copying and pasting code. Sometimes a LOT of code. Whole files. Dozens of files. And as soon as you do that, you’re done. Someone fixes a bug in one place and forgets to fix it in another–over time, the files diverge.
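One cheap way to catch this kind of duplication before the copies diverge is to checksum the tree and look for byte-identical files. A sketch, assuming GNU coreutils (md5sum and uniq -w); the /tmp demo tree here is a stand-in for a real trunk/:

```shell
#!/bin/sh
# Build a demo tree: two byte-identical files and one distinct file
mkdir -p /tmp/trunk/prod_a /tmp/trunk/prod_b
printf 'int util_fn(void) { return 1; }\n' > /tmp/trunk/prod_a/util.c
printf 'int util_fn(void) { return 1; }\n' > /tmp/trunk/prod_b/util.c
printf 'int other_fn(void) { return 2; }\n' > /tmp/trunk/prod_b/other.c

# Checksum every .c file and print only the lines whose hash repeats
# (-w32 compares just the 32-char md5 prefix; -D prints all repeats).
# These are the copy-paste candidates that will diverge later.
find /tmp/trunk -name '*.c' -type f -exec md5sum {} + \
    | sort \
    | uniq -w32 -D
```

It only flags files that are still identical, so it's most useful as a periodic check; once the copies have drifted, you're into diff territory.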

If you’re taking money from investors and have screwed up the source tree layout, there are two ethical options:

  1. Fix it. A week or two now will be significantly cheaper than months of pain and glaring customer issues when you ship.
  2. Give the money back to the investors.

If you’re reading this and shaking your head because you can’t believe people paint themselves into a corner with their source tree layouts, I envy you! But if you’re reading this and trying to pretend you don’t face a similar position with your product, it might be time to stop hacking and start engineering by opening up the communication paths where they should be open and locking down the isolation and encapsulation where they should not. This holds true for any language and for any type of product.


Your customers can tell if your team gets along

Posted by mitch on February 04, 2014
business, products

In 1968, Dr. Melvin E. Conway published an article called, “How Do Committees Invent?”

In this paper, buried towards the end, is the following insight:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.

Thinking back on my product experiences, this has been the case every time. The cracks in products show up where teams didn’t talk to each other, where two people didn’t get along, or where someone wasn’t willing to pick up the phone and call someone else. Features or modules that integrated well and worked smoothly reflect places where two or more people worked well together. Cases where one person went off by himself and re-invented the wheel, sometimes even for large core parts of a product, led to internal difficulties, and those internal difficulties turned into product difficulties when the product shipped.

As an engineer, every time you don’t pick up the phone to call a colleague about an integration point, you’re making life harder for your customer. As a manager, every time you don’t deal with someone not communicating, you’re making life harder for your customer. Meanwhile, your competitors who play well together are building beautiful products that flow.

The communication successes and failures of an organization are independent of the organization size. It’s fashionable to say that small teams work better than large organizations (37signals vs Microsoft), but in fact, a small team can be incredibly dysfunctional, just as a large organization can work well (many start-ups vs Apple).

Of course, the scope of “systems” goes beyond products. IT deployments–if your VPN guy and your Exchange guy don’t like each other, how many times do you have to log in to different computers? Marketing strategies–700 folks clicked on an emailed link, but did those people have a good experience on the landing page? Sales operations–much time was invested in segmenting and building custom collateral, but were those materials used or assembled ad hoc in the field? Manufacturing–sure, everyone signed off on the Micron chips, but “someone” decided to build half the boards with Hynix and didn’t tell anyone? Support–is your support experience congruent with the product, or is it outsourced with its own login, and the support folks have their own culture?

A team that doesn’t communicate openly, frequently, and freely is expensive to operate and builds lower quality products, end-to-end.


Scribbles on the New Mac Pro

Posted by mitch on January 26, 2014
hardware

A significant number of folks have asked about my thoughts on the new Mac Pro… so here we go. I promise not to tell you the same nonsense you have already read everywhere else (lighted ports, etc.).

Some background: I bought an 8-core 2008 Mac Pro on the day they were available for pre-order. It was my main workstation for years, until September 2012, when the speed and RAM ceiling became painful enough to upgrade to the “2012” Mac Pro, a 12 core 2.4 GHz machine. Clock for clock, that upgrade yielded compute performance roughly double the 2008 Mac Pro.

I wasn’t sure what to expect with that upgrade, nor was I sure what to expect with the new 2013 Mac Pro. Because of price, I elected to try a 6-core machine with the D500 video, 1 TB flash, and 64 GB of OWC RAM.

I recently ran some performance tests to see how things are going with the types of computing I do. One test is a unit test of some code I am writing. The code talks to several VMs on a third Dell VMware ESXi box and spends most of its time in select() loops. There was almost no performance difference between the old and new Macs–about 3%, which isn’t surprising.

However, I have some code that runs on local disk and does heavier CPU work. One of the pieces of code shoves a lot of data through a commercial database package inside of a VM. The VM is configured with 8 cores and 16 GB of RAM on both machines. We’ll call this Test A.

Another test does extensive CPU calculations on a multi-gigabyte dataset. The dataset is read once, computations are done and correlated. This runs on native hardware and not inside of a VM. We’ll call this Test B.

             old Mac Pro [1]   new Mac Pro [2]   Retina 13″ MacBook Pro [3]
Test A:      65.6 seconds      38.1 seconds      N/A (not enough RAM)
Test B:      82.3 seconds      52.9 seconds      67.8 seconds

[1] 2012 Mac Pro, 12-core 2.4 GHz, 64 GB of RAM, OWC PCIe flash
[2] 2013 Mac Pro, 6-core 3.5 GHz, 64 GB of RAM, Apple flash
[3] 2013 Retina MacBook Pro 13″, 2-core 3 GHz i7, 8 GB of RAM, Apple flash

As you can see, the new Mac does the same work in about 40% less time. The CPU work here is in the range of 1-3 cores; it doesn’t scale up to use all the available cores. To keep the tests as fair as possible, the old Mac Pro boots from a 4-SSD RAID 0+1 and the test data lived on an OWC PCIe flash card. None of these tests utilize the GPUs of the old or new Macs in any fashion, nor is the code particularly optimized one way or the other. I ran the tests 3 times per machine and flushed the buffer caches before each run.

Does the Mac feel faster day to day? Maybe. In applications like Aperture, where I have 30,000 photos, scrolling and manipulation “seems” a heck of a lot better. (For reference, the old Mac has the Sapphire 3 GB 7950 Mac card. I don’t have an original Radeon 5770 to test with, having sold it.)

The cable mess behind the new Mac is the same as the old Mac. In fact, it’s really Apple’s active DVI adapters for my old Apple monitors that contribute to most of the cable mess. Once the Apple monitors start to die, that mess will go away, but until then I see little reason to upgrade.

The physical space of the new Mac Pro is a significant advantage. The old Pro uses 4 sq ft of floor space with its external disk array. The new Pro by itself consumes a footprint smaller than a Mac Mini’s (see photo at the end of this post)!

The fan is quiet, even under heavy CPU load. The top surface seems to range from 110 F to 130 F; the old Mac’s surface exhaust ranged from 95 F to 99 F at the time I measured it. So the new machine is hotter to the touch, and indeed the sides of the chassis range from 91 F at the very bottom to about 96 F on average. For reference, the top of my closed Retina MacBook as I write this is about 90 F, and the metal surface of the 30″ Cinema Display runs around 88 F to 90 F in my measurements (all measured with a non-contact IR thermometer).

Because there is no “front” of the new Mac Pro, you can turn it at any angle that reduces cable mess without feeling like you’ve got it out of alignment with, say, the edge of a desk. This turns out to be useful if you’re a bit particular about such things.

On storage expansion, there’s been a lot of concern about the inability to put drives into the new Pro. Frankly, I ran my 2008 machine without any internal disks for years, instead using an Areca 1680x SAS RAID, so I’m glad to see this change. There are lots of consumer-level RAIDs out there under $1000, but I’ve given up on using them–performance is poor and integrity is often questionable.

I am backing up to a pair of 18 TB Thunderbolt Pegasus systems connected to a Mini in my basement, and bought an Areca ARC-8050 Thunderbolt RAID 8-Bay enclosure and put in 24 TB of disks for the new Pro. Sadly, while it’s fine in a closet or basement, it turns out to be too loud to sit on a desk, so I bit the bullet and ordered a 10 meter Thunderbolt cable. I haven’t received the cable yet, so I haven’t moved my data off my Areca SAS RAID in my old Pro yet. But once that is done, I expect to stop using the old 8 TB SAS RAID and just use the new RAID. These are expensive storage options, but the cheap stuff is even more expensive when it fails.

So, should you buy the new Mac Pro?

I don’t know.

For me, buying this Pro was never about upgrading from my old Pro, but rather upgrading my second workstation–a maxed out 2012 Mac Mini that struggled to drive 30″ displays and crashed regularly while doing so (it’s stable with smaller displays, but in the sample size of four or five Minis I’ve had over the years, none of them could reliably drive a 30″–Apple should really not pretend that they can). In the tests above, I’ve ignored the 900 MHz clock difference, but clearly that contributes to the performance for these kinds of tests.

What about price? This new Mac Pro ran me about $6100 with tax, shipping, and the OWC RAM upgrade. The old Mac Pro cost about $6300 for the system, PCIe flash, SSDs, brackets, video card upgrade, and OWC RAM upgrade. (The disk systems are essential to either Mac as a main workstation, but also about the same price as each other.) I don’t view the new Mac Pro as materially different in price. Pretty much every main workstation I’ve had in the last 12 yrs has run into the low five-figures. In the grand scheme of things, it’s still cheaper than, say, premium kitchen appliances, though perhaps it doesn’t last as long! On the other hand, I’m not good enough at cooking that my kitchen appliances are tools that enable income. If I wasn’t using my Macs to make money, I doubt I’d be buying such costly machines.

While I am not a video editor, and just do some 3d modeling for fun as part of furniture design or remodeling projects, I feel this machine is warranted for my use in heavy CPU work and/or a desire for a lot of monitors. I’m not in the target GPU-compute market (yet?), but I do want a big workspace. There’s no other Mac that offers this (I get headaches from the glossy displays Apple offers, though the smaller laptops screens are ok).

So now on my desk, I have a pair of Pros, each driving a set of 3×30″ displays, which matches the work I am doing right now. I haven’t had a video lockup in 12 days and counting, which has been a huge time saver and frustration reducer, so I’m happy that I jumped on this sooner rather than later.


30 Years of Mac

Posted by mitch on January 24, 2014
hardware

My parents bought a Mac 128K in 1984 (pictured below). The screen stopped working in 1993, and it hadn’t been reliable at that point for a number of years–my dad upgraded to a pair of Mac Pluses when they came out and then later he upgraded again to the Mac II.

There were lots of frustrating things about the Mac 128. Almost no software worked on it, since it was outdated almost immediately by the Mac 512. MacWrite didn’t have a spell check or much of anything else. Only one program could run at a time–no MultiFinder. A 1 MB Mac Plus was a significantly better computer, especially if you had an external hard disk that conveniently fit under the Mac–thus increasing speed, storage capacity, and the height of the monitor. Even the headphone port on the 128 was mono, if I recall correctly.

Yet there was something deeply magical about computing in that era. I spent hours goofing off in MacDraw and MS Basic. At one point, my dad had the system “maxed out” with an Apple 300 baud modem, an external floppy drive, and the ImageWriter I printer. At some point, the modem went away and we were modemless for a number of years, but one day he brought home an extra 1200 baud modem he had at his office and I spent hours sorting out the Hayes AT command set to get it to work–a lot of registers had to be set on that modem; it wasn’t just a simple matter of ATDT555-1212.

That reminds me, I need to call Comcast. It seems that they cut their pricing on 100 Mbit connections.


Moving AV Gear to the Basement

Posted by mitch on January 04, 2014
audio, home

When I bought my house in Boston, I gutted most of it and did extensive rewiring, including speaker wires in the living room. Recently, I had a large built-in cabinet/bookcase built for the living room and had to move some of those wires and outlets in preparation for it. Since the electricians had to come out anyway, I decided to move all my AV components into the basement. The goal was just to have the TV, speakers, and subwoofer in the living room.

There are now 5 drops down to the basement for the surround speakers. I soldered RCA keystone jacks onto one of the old speaker drops for the subwoofer–the only place I could find solderable keystone RCA jacks was, strangely enough, Radio Shack (for 57 cents each). Behind the TV, I had the electricians pull 8 new Cat6 drops and a single HDMI cable. I also had the electricians run two 15 amp dead runs that go into a 2-gang box and terminate in AC inlets (male connectors). This puts the TV and sub in the living room on the same surge protection system as the basement, avoiding any ground loop issues and eliminating the need for separate surge protectors in the living room for this gear.

Four of the Cat6 drops terminate at the AV shelving. Two of these carry serial and IR lines, and two are held as spares in case of future video-over-Cat6 or other needs. The other four Cat6 lines run to the basement patch panel. Of course, some of these could also be patched back to the AV shelves if needed for uses other than Ethernet.

I’m using a cheap IR repeater from Amazon to control the components from my Harmony remote. This works fine with my Onkyo receiver, HDMI switch, Apple TV, and Roku. It doesn’t work with my Oppo Blu-ray player–apparently there’s something different about the IR pulse Oppo uses, and I couldn’t figure out from various forum posts which general repeaters would work. Fortunately, Oppo sells their own IR repeater system for about $25, and I’ve modified it to run over Cat6 as well. This means I have two IR sensors hidden under the TV that plug into 1/8″ mono jacks in the wall using Leviton keystone modules.

The Playstation 4 and Wii use Bluetooth controllers, which work fine through the floor. Nothing fancy was needed to extend these. It turns out that the Wii sensor bar is an “IR flashlight”–the bar itself doesn’t send any data to the Wii. So I bought one with a USB connector on it so it can plug into any USB power supply. (The original Wii bar had weird 3-tooth screws and I didn’t want to tear it up.)

I also finally got around to building a 12v trigger solution for my amplifier–my 7-year-old Onkyo receiver doesn’t have a 12v trigger for the main zone, but a 10v wall wart plugged into the Onkyo does the trick, now that I’ve soldered a 1/8″ mono plug onto the end and plugged it into the Outlaw amp. (My front speakers are 4 ohm and the Onkyo would probably overheat trying to drive them.)

The final missing piece was a volume display. I missed knowing the receiver’s volume, selected input, and listening mode, so I built a simple serial device that plugs into the Onkyo’s serial port over Cat6 cables. A 20×2 large-character display, driven by an Arduino (firmware code is here), queries the Onkyo for status a few times a second. It shows mute and power state, volume, listening mode (e.g., THX, Stereo, Pure Audio…), and the input source. My next step is to add a second serial interface to the display so that I can query the Oppo and show time into the disc, playing state, etc. (Many newer receivers support their serial protocols over Ethernet, albeit at a higher standby power usage, and as far as I can tell, Oppo has not opened up their Ethernet protocol, though their serial protocol is well documented.) The enclosure is rather ugly, but works for the moment until I build something better:
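For a rough sense of the query/parse cycle the display performs, here’s a sketch in Python. The firmware itself is Arduino code (linked above); the “!1…\r” framing and QSTN suffix follow Onkyo’s documented ISCP serial protocol, but the specific three-letter command names here (“MVL” for master volume, “PWR” for power) are assumptions that should be checked against the receiver’s protocol PDF.

```python
# Sketch of Onkyo ISCP-style serial framing (command names are assumptions;
# verify against the receiver's protocol documentation).

def build_query(command: str) -> str:
    """Build a status-query frame, e.g. "MVL" -> "!1MVLQSTN\r"."""
    return f"!1{command}QSTN\r"

def parse_response(frame: str) -> tuple[str, str]:
    """Split a response frame like "!1MVL28\r" into ("MVL", "28")."""
    body = frame.strip()
    if body.startswith("!1"):
        body = body[2:]
    return body[:3], body[3:]

# On the real hardware, the Arduino writes build_query("MVL") and friends
# to the serial port a few times a second, then runs each line it reads
# back through parse_response() to update the 20x2 LCD.
```

This is just the framing logic; the actual firmware also handles serial timing and the LCD driver.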

Note that another option is just to buy a receiver/pre-amp that puts the volume out over HDMI. My receiver is older and leaves the HDMI signal unmolested. Most modern gear will just put the volume up on the screen, but my next processor is going to be a big purchase, and this was a lot cheaper for now.

I did make a few mistakes:

  • The quad coming off the inlets should have been a 4-gang (8 outlets).
  • I almost ended up with only 4 Cat6 drops behind the entertainment center, mostly due to the length of Cat6 cable I had on hand. Happily my electrician went and bought another 1000 ft spool and said, “Mitch, what do you really want?”
  • I probably should have run a second HDMI cable, just in case I ever need it.
  • The 8 Cat6 cables, a coax line (in case I ever want another sub or need a coax line), and the HDMI cable all go into a 3-gang box in the living room. This is a bit tight for this many wires, especially when one of the Cat6 lines splits into two 1/8″ connectors.
  • Not really a mistake, but if you’re doing this and buying new shelving for the rack, buy shelves with wheels. I am just using an old shelf I already had, but wheels would be very handy.

If you have a small living room with a basement or closet nearby, this might be a good way to go if you don’t want to get rid of AV components. With more room to keep things organized and more airflow around the electronics, I’m really happy with how this turned out. Since the Blu-ray player is in the basement, the DVDs and Blu-rays are now in the basement too, which has freed up ~50 linear feet of shelving upstairs. (I’ve ripped a lot of my movies, but it’s a pain and I haven’t done them all.)

And best of all, there is now a lot less crap in the living room.
