Linux for the Young Computer Scientist

Posted by mitch on April 07, 2015
career, education, software

So you’re about to graduate from college, and while looking for a job, someone expresses surprise when you confess that you’re not well-versed in Linux. Uh oh.

Everyone who expects to work in computing should know some basics about Linux. So much of the world runs Linux these days–phones, thermostats, TVs, cars.

Here’s a list of tasks that the young computer scientist should be able to do with Linux. The goal isn’t for you to be able to get a job as a “sysadmin” but to have a general familiarity with enough different things that you can solve real world problems with a Linux system. Of course, much of this applies to Mac OS X, too.

  1. Install CentOS or Ubuntu into a virtual machine on your Windows or Mac desktop/laptop. Open a Terminal window.
  2. You’ll probably get most of your help through Google searching, but on the command line, you can get help with specific commands by using the man command. E.g., man ls
  3. Basic file navigation: ls, cd, pwd, pushd, popd, dirs, df, du, mv. Be careful with rm.
  4. Basic editing with vim (open a file, save it, close it without saving, edit it, copy/paste with yank, jump to a specific line number, delete a word, delete a line, replace a letter.) (You can use nano while you’re coming up to speed on vim.)
  5. Use grep, less, cat, tail, head, diff commands. Use with pipes. Use of tail -f, less +F, tail -10, head -5 (or other numbers) is handy.
  6. tar and gzip to create and expand archives of files.
  7. Use sed and awk — replace the contents of a file, print a column of a file.
  8. Command-line git commands to check out a repository, edit files, commit, and push back to a remote repository (e.g., GitHub).
  9. Basic process navigation: ps, top, kill, fg, bg, jobs, pstree, Ctrl-Z, Ctrl-C.
  10. Unix permissions: chmod, chgrp, useradd, sudo, su; what do 777, a+rw, u+r mean, how to read the left column of ls -l / output.
  11. Simple bash scripts: write a loop to grep a file for certain output, set command aliases (see the first sketch below this list).
  12. Compile a simple C program with gcc. Use gdb to set breakpoints, view variables in a C program being debugged (where, bt, frame, p).
  13. Use tcpdump to watch HTTP traffic to a certain host (a one-liner for this follows the list).
  14. Understand /etc/rc.d and /etc/init.d scripts
  15. A basic understanding of /etc/rc.sysinit
  16. Attach a new disk and format it with fdisk or parted and mkfs.ext4. Run fsck. Mount it. Check it with df. (A sketch follows the list.)
  17. Know how to disable selinux and iptables for debugging. (service, chkconfig)
  18. How to use the route, ifconfig, arp, ping, traceroute, dig, nslookup commands.
  19. Write an iptables rule to forward a low-numbered port (e.g., 80) to a high-numbered port (e.g., 5000). Why would someone want to do this? (See the sketch following the list.)
  20. A cursory understanding of the filesystem layout — what’s in /etc, /bin, /usr, /var, etc.
  21. A cursory understanding of what’s in /proc.
  22. Configure and use SSH keys for automatic login to another host. (Sketch below.)
  23. Forward a GUI window over SSH with X11
  24. Reboot and halt the machine safely (shutdown -h now, reboot, halt -p, init, etc commands)
  25. yum and apt-* commands (CentOS and Ubuntu, respectively)
  26. Modify boot options in grub to boot single user, to boot to a bash shell
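
For item 11, here’s a minimal sketch of the kind of script to aim for; the log path and the ERROR pattern are made-up examples:

    #!/bin/bash
    # Print every line mentioning ERROR in each rotated log file.
    for f in /var/log/myapp.log*; do
        echo "== $f =="
        grep -n "ERROR" "$f"
    done

    # Command aliases live in ~/.bashrc, e.g.:
    alias ll='ls -l'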
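
For item 13, one line of tcpdump goes a long way (example.com stands in for whatever host you care about):

    sudo tcpdump -A -s 0 'tcp port 80 and host example.com'
    # -A prints packet payloads as ASCII so you can read the HTTP headers;
    # -s 0 captures full packets instead of the default snap length.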
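
For item 16, the rough shape of the procedure, assuming the new disk shows up as /dev/sdb (yours may differ, and picking the wrong device is destructive):

    sudo fdisk /dev/sdb            # interactively create one partition, /dev/sdb1
    sudo mkfs.ext4 /dev/sdb1       # build an ext4 filesystem on it
    sudo fsck /dev/sdb1            # sanity-check the new filesystem
    sudo mkdir -p /mnt/newdisk
    sudo mount /dev/sdb1 /mnt/newdisk
    df -h /mnt/newdisk             # confirm the size and free space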
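
Item 19 is less mysterious than it sounds: only root can bind ports below 1024, so it’s common to run, say, a development web server as a regular user on port 5000 and have the kernel redirect port 80 to it. One way to write the rule (a sketch; how you make it persistent varies by distribution):

    sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 5000
    # PREROUTING isn't consulted for connections from the machine itself;
    # a similar OUTPUT rule covers the loopback case:
    sudo iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-port 5000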
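
And for item 22, key-based SSH login boils down to two commands (user@host is a placeholder):

    ssh-keygen -t rsa       # accept the defaults; creates ~/.ssh/id_rsa and id_rsa.pub
    ssh-copy-id user@host   # appends your public key to the remote authorized_keys
    ssh user@host           # should now log in without a password prompt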

For extra credit:

  1. The find command is a complicated beast, but simple to get started with; see the sketch after this list.
  2. Copy files over SSH with scp (sketch after this list).
  3. The dd command is useful for a variety of tasks, such as grabbing an image of a disk, getting random data from /dev/urandom, or wiping out a disk. Also be aware of the special files /dev/zero and /dev/null.
  4. Figure out how to recover a forgotten root password.
  5. Disable X11 and be able to do these tasks without the GUI.
  6. Do the same tasks above on a FreeBSD machine.
  7. Without the GUI, configure the machine to use a static IP address instead of DHCP.
  8. Use screen to create multiple sessions. Log out and re-attach to an existing screen session.
  9. Write a simple Makefile for a group of C or C++ files.
  10. What does chmod +s do? Other special bits.
  11. netstat, ncat, ntop.
  12. ldd, strings, nm, addr2line, objdump
  13. Grep with regular expressions
  14. What’s in /etc/fstab?
  15. history, !<number>, !!, !$, Ctrl-R
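
A few common find invocations to get started with (the paths and patterns are just examples):

    find /var/log -name '*.log'               # files matching a name pattern
    find . -type f -mtime -1                  # regular files modified in the last day
    find . -name '*.tmp' -exec rm -v {} \;    # run a command on each match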
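
Copying over SSH is one command in either direction (user@host and the paths are placeholders):

    scp report.txt user@host:/tmp/        # local -> remote
    scp user@host:/var/log/messages .     # remote -> local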

Books to peruse:

  1. Unix Power Tools
  2. sed & awk
  3. bash Cookbook
  4. Learning Python. Every computing professional should know a simple scripting language that ties into the OS, for scripts more complex than are rational in bash; Python is an excellent place to start.
  5. Advanced Programming in the UNIX Environment, 3rd Edition (be sure to get the latest edition)
  6. If you’re interested in networking, be sure to read TCP/IP Illustrated, Volume 1: The Protocols (2nd Edition)
  7. You probably took an OS class. While Tanenbaum and Silberschatz write great books, if you want to know Linux internals better, Rubini’s device driver book is an excellent read. There is a 4th edition coming later this year. Linux Device Drivers, 3rd Edition


The New Mac Pro @ 8 months

Posted by mitch on August 03, 2014
hardware

I ordered my 2013 Mac Pro the day they went up for sale, even though I had an early morning flight that day. To recap, I bought a 6 core with D500, 1 TB SSD, and upgraded to 64 GB of OWC RAM. I upgraded my old 8 TB Areca RAID to 24 TB, bought an OWC Thunderbolt PCI chassis, and moved over my (old) Areca 1680x card. The OWC chassis is loud, so I also bought the 10 meter Thunderbolt cable and put the adapter box and disks in my office closet.

I had been running 3×30″ Apple displays with the cable mess that comes with the DisplayPort->DVI adapters, but recently switched out the Apple displays for my HP ZR30ws. Frankly, the HPs have a better picture, likely just due to crisper, more even lighting as a result of being 6 months old instead of 6 yrs old, but best of all, they require no adapters.

I sold my 2012 12-core Mac on Craigslist.

The highlights of the new Mac Pro are the lowered energy usage, the reclaimed physical space, and the huge reduction of cable mess. It’s disappointing that going to 128 GB of RAM comes at a huge memory speed hit in the new box, but I can live with it (hoping that something better will be available by the time I need more than 64 GB).

The new Mac has only been off for about 2 days since I bought it, due to construction in my office. It’s been solid. I’m happy with the upgrade.

(I didn’t mention performance! It’s fast. The old box was fast, too. Is this one faster? Go look at my other post. I am spending most of my time crunching numbers in C++ on Linux this year, and it’s been great–especially with the 3.5 GHz single thread vs 2.4 GHz for the old 12-core–but 64 GB has been an issue for some of the calculations I am doing. But not a show stopper yet.)

The only downside that has bitten me is that there’s no locking mechanism for Thunderbolt cables–so if one falls out, and your home directory is on that Thunderbolt device (mine is), it’s very unfortunate. I’ve “solved” this with zip ties for now.



Email Introductions

Posted by mitch on August 02, 2014
business

From time to time, someone asks me to facilitate an introduction. Sometimes it’s to someone specific (“Mitch, do you know Bob?”) and sometimes it’s vague (“I’d like to meet people with problem X” or “who do activity X”). If I am able, I’m happy to help, as I’ve been fortunate to (and continue to) benefit from others helping me with this kind of thing.

A few thoughts on this:

  1. Send the person you are asking for an introduction an email, not a LinkedIn message. Depending on the person, you might call them too.
  2. Give the person a paragraph they can copy and paste or edit. Why do you want the introduction? If it’s to someone specific, why specifically them? Don’t make your introducer create copy from scratch.
  3. When/if the introduction happens, move the introducer to bcc right away. If the other party moves the introducer to bcc, don’t re-add the person!
  4. Say thanks. Especially if someone introduces you to multiple people in one go. Sometimes I introduce folks to half a dozen customers or partners and never hear any follow-up. Was it useful? Were the introductions crap and I wasted everyone’s time? I have no idea.
  5. If you get connected with someone and they stop interacting, it might be ok to query the introducer, but don’t be surprised if they pass on re-engaging with the person of interest.

Related: If I introduce you to someone, I will often ping that person and ask if they are interested in an introduction before I send the first email with both of you. The only time I may not is when I am pinging a vendor with a potential new customer. Related: It drives me crazy when someone introduces me to someone without asking, especially if it’s not clear why in the email. I rarely reply to these emails.

Also related: Assume nothing about geography. I always cringe when one of the replies says, “Thanks for the intro — Hey Bob, should we get lunch?” when the two folks are thousands of miles apart. Not everyone lives in (y)our city and if I am creating the copy from scratch, I may not include geography information.

There’s probably more to say about this.

Conway’s Law and Your Source Tree

Posted by mitch on February 05, 2014
software

In the last post, I mentioned Conway’s Law:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.

Dr. Conway was referring to people in his article–but what if we replace “organization” with your product’s source tree and “the communication structures” with how functions and data structures interact? Let’s talk more about Conway’s Law in the context of source tree layout.

Many products of moderate complexity involve multiple moving parts. Maybe a company has a cloud service (back end) and a Web UI (front end). Or a back end and a mobile front end. Or a daemon, some instrumentation scripts, and a CLI. Or a firmware and a cloud service.

I’ve had my hands in a number of companies at various depths of “hands in.” Those who lay out a source tree that fully acknowledges the complexity of the product as early as possible tend to be the ones who win. Often, a company is started to build a core product–such as an ability to move data–and the user interface, the start-up scripts, the “stuff” that makes the algorithm no longer a student project but a product worth many millions of dollars–are an afterthought. That’s fine until someone creates a source tree that looks like this:

trunk/
	cool_product/
		main.c
		error.c
		util.c
		network.c
	stuff/
		boot.c
		cli.c
		gui.c

What’s wrong here? Presumably, some of the code in util.c could be used in other places. Maybe some of the functions in error.c would be handy to abstract out as well. An arrangement like this, in which the cool_product is a large monolithic app, likely means it’s going to be difficult to test any of the parts inside of it; modules and layering are unlikely to be respected in a large monolithic app. (Note that I am not saying it’s impossible to get this right, but I am saying it’s unlikely that tired programmers will keep the philosophy in mind, day in and day out.)

A slightly different organization that introduces a library might look as follows:

trunk/
	lib/
		util.c
		error.c
		network.c
		tests/
			Unit tests for the lib/ stuff
	prod/
		cool_product/
			main.c
		gui/
		cli/
	tools/
		Build scripts or related stuff required,
		code generation, etc.
	platform/
		boot.c

As a side effect, we can also improve testing of the core code, thus improving reliability and regression detection. Ideally, the cool_product is a small amount of code outside of libraries that can be unit tested independently.
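
As a concrete sketch of what this layout buys you (the test file name here is hypothetical), the library can be built and its unit tests run without linking in any product code at all:

    # Build the shared library code by itself...
    cc -c lib/util.c lib/error.c lib/network.c
    ar rcs libcore.a util.o error.o network.o

    # ...then link the unit tests against it and run them.
    cc -o run_lib_tests lib/tests/test_util.c libcore.a
    ./run_lib_tests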

More than once I’ve heard the excuse, “We don’t have time to do this right with the current schedule.”

“I don’t have time for this” means “This isn’t important to me.” When you say, “I don’t have time to clean up the garage,” we all know what you really mean.

I was incredibly frustrated working with a group who “didn’t have time” to do anything right. Years later, that company continues to ship buggy products that could have been significantly less buggy. A few weeks of investment at the beginning could have avoided millions of dollars of expense and numerous poor reviews from customers due to the shoddy product quality. And it all comes back to how hard (or easy) it is to use the existing code, i.e., the communication structure of the code.

If you don’t have time to get it right now, when will you have time to go back and do it right later?

Getting it right takes some time. But getting it wrong always takes longer.

Teams with poor source tree layout often end up copying and pasting code. Sometimes a LOT of code. Whole files. Dozens of files. And as soon as you do that, you’re done. Someone fixes a bug in one place and forgets to fix it in another–over time, the files diverge.

If you’re taking money from investors and have screwed up the source tree layout, there are two ethical options:

  1. Fix it. A week or two now will be significantly cheaper than months of pain and glaring customer issues when you ship.
  2. Give the money back to the investors.

If you’re reading this and shaking your head because you can’t believe people paint themselves into a corner with their source tree layouts, I envy you! But if you’re reading this and trying to pretend you don’t face a similar position with your product, it might be time to stop hacking and start engineering by opening up the communication paths where they should be open and locking down the isolation and encapsulation where they should not. This holds true for any language and for any type of product.


Your customers can tell if your team gets along

Posted by mitch on February 04, 2014
business, products

In 1968, Dr. Melvin E. Conway published an article called, “How Do Committees Invent?”

In this paper, buried towards the end, is the following insight:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.

Thinking back on my product experiences, this has been the case every time. The cracks in products show up where teams didn’t talk to each other, where two people didn’t get along, or where someone wasn’t willing to pick up the phone and call someone else. Features or modules that integrated well and worked smoothly reflect where two or more people worked well together. Cases where one person went off by himself and re-invented the wheel, sometimes even large core parts of a product, led to internal difficulties, and those internal difficulties turned into product difficulties when the product shipped.

As an engineer, every time you don’t pick up the phone to call a colleague about an integration point, you’re making life harder on your customer. As a manager, every time you don’t deal with someone not communicating, you’re making life harder on your customer. Meanwhile your competitors who play well together are building beautiful products that flow.

The communication successes and failures of an organization are independent of the organization size. It’s fashionable to say that small teams work better than large organizations (37signals vs Microsoft), but in fact, a small team can be incredibly dysfunctional, just as a large organization can work well (many start-ups vs Apple).

Of course, the scope of “systems” goes beyond products. IT deployments–if your VPN guy and your Exchange guy don’t like each other, how many times do you have to login to different computers? Marketing strategies–700 folks clicked on an emailed link, but did those people have a good experience on the landing page? Sales operations–much time was invested in segmenting and building custom collateral but were those materials used or ad hoc assembled in the field? Manufacturing–sure, everyone signed off on the Micron chips, but “someone” decided to build half the boards with Hynix and didn’t tell anyone? Support–Is your support experience congruent with the product, or is it outsourced with its own login, and the support folks have their own culture?

A team that doesn’t communicate openly, frequently, and freely is expensive to operate and builds lower quality products, end-to-end.


Scribbles on the New Mac Pro

Posted by mitch on January 26, 2014
hardware

A significant number of folks have asked about my thoughts on the new Mac Pro… so here we go. I promise not to tell you the same nonsense you have already read everywhere else (lighted ports, etc.).

Some background: I bought an 8-core 2008 Mac Pro on the day they were available for pre-order. It was my main workstation for years, until September 2012, when the speed and RAM ceiling became painful enough to upgrade to the “2012” Mac Pro, a 12 core 2.4 GHz machine. Clock for clock, that upgrade yielded compute performance roughly double the 2008 Mac Pro.

I wasn’t sure what to expect with that upgrade, nor was I sure what to expect with the new 2013 Mac Pro. Because of price, I elected to try a 6-core machine with the D500 video, 1 TB flash, and 64 GB of OWC RAM.

I recently ran some performance tests to see how things are going with the types of computing I do. One test is a unit test of some code I am writing. The code talks to several VMs on a third Dell VMware ESXi box and spends most of its time in select() loops. There was almost no performance difference between the old and new Macs–about 3%, which isn’t surprising.

However, I have some code that runs on local disk and does heavier CPU work. One of the pieces of code shoves a lot of data through a commercial database package inside of a VM. The VM is configured with 8 cores and 16 GB of RAM on both machines. We’ll call this Test A.

Another test does extensive CPU calculations on a multi-gigabyte dataset. The dataset is read once, computations are done and correlated. This runs on native hardware and not inside of a VM. We’ll call this Test B.

         old Mac Pro [1]    new Mac Pro [2]    Retina 13″ MacBook Pro [3]
Test A:  65.6 seconds       38.1 seconds       N/A (not enough RAM)
Test B:  82.3 seconds       52.9 seconds       67.8 seconds

[1] 2012 Mac Pro, 12-core 2.4 GHz, 64 GB of RAM, OWC PCIe flash
[2] 2013 Mac Pro, 6-core 3.5 GHz, 64 GB of RAM, Apple flash
[3] 2013 Retina MacBook Pro 13″, 2-core 3 GHz i7, 8 GB of RAM, Apple flash

As you can see, the new Mac does the same work in about 40% less time. The CPU work here is in the range of 1-3 cores; it doesn’t scale up to use all the available cores. To keep the tests as fair as possible, the old Mac Pro boots from a 4-SSD RAID 0+1 and the test data lived on an OWC PCIe flash card. None of these tests utilize the GPUs of the old or new Macs in any fashion, nor is the code particularly optimized one way or the other. I ran the tests 3 times per machine and flushed the buffer caches before each run.

Does the Mac feel faster day to day? Maybe. In applications like Aperture, where I have 30,000 photos, scrolling and manipulation “seems” a heck of a lot better. (For reference, the old Mac has the Sapphire 3 GB 7950 Mac card. I don’t have an original Radeon 5770 to test with, having sold it.)

The cable mess behind the new Mac is the same as the old Mac. In fact, it’s really Apple’s active DVI adapters for my old Apple monitors that contribute to most of the cable mess. Once the Apple monitors start to die, that mess will go away, but until then I see little reason to upgrade.

The physical space of the new Mac Pro is a significant advantage. The old Pro uses 4 sq ft of floor space w/ its external disk array. The new Pro by itself actually consumes a footprint smaller than a Mac Mini (see photo at end of this post)!

The fan is quiet, even under heavy CPU load. The top surface seems to range from 110 F to 130 F; the old Mac has a surface exhaust range from 95 F to 99 F at the time I measured it. So it’s hotter to the touch, and indeed the sides of the chassis range from 91 F at the very bottom to about 96 F on average. For reference, the top of my closed Retina MacBook at the time I’m writing this is about 90 F and the metal surface of the 30″ Cinema display runs around 88 F to 90 F in my measurements (all measured with an IR non-contact thermometer).

Because there is no “front” of the new Mac Pro, you can turn it at any angle that reduces cable mess without feeling like you’ve got it out of alignment with, say, the edge of a desk. This turns out to be useful if you’re a bit particular about such things.

On storage expansion, there’s been a lot of concern about not being able to put drives inside the new Pro. Frankly, I ran my 2008 machine without any internal disks for years, instead using an Areca 1680x SAS RAID. I’m glad to see this change. There are lots of consumer-level RAIDs out there under $1000, but I’ve given up on using them–performance is poor and integrity is often questionable.

I am backing up to a pair of 18 TB Thunderbolt Pegasus systems connected to a Mini in my basement, and bought an Areca ARC-8050 Thunderbolt RAID 8-Bay enclosure and put in 24 TB of disks for the new Pro. Sadly, while it’s fine in a closet or basement, it turns out to be too loud to sit on a desk, so I bit the bullet and ordered a 10 meter Thunderbolt cable. I haven’t received the cable yet, so I haven’t moved my data off my Areca SAS RAID in my old Pro yet. But once that is done, I expect to stop using the old 8 TB SAS RAID and just use the new RAID. These are expensive storage options, but the cheap stuff is even more expensive when it fails.

So, should you buy the new Mac Pro?

I don’t know.

For me, buying this Pro was never about upgrading from my old Pro, but rather upgrading my second workstation–a maxed out 2012 Mac Mini that struggled to drive 30″ displays and crashed regularly while doing so (it’s stable with smaller displays, but in the sample size of four or five Minis I’ve had over the years, none of them could reliably drive a 30″–Apple should really not pretend that they can). In the tests above, I’ve ignored the 900 MHz clock difference, but clearly that contributes to the performance for these kinds of tests.

What about price? This new Mac Pro ran me about $6100 with tax, shipping, and the OWC RAM upgrade. The old Mac Pro cost about $6300 for the system, PCIe flash, SSDs, brackets, video card upgrade, and OWC RAM upgrade. (The disk systems are essential to either Mac as a main workstation, but also about the same price as each other.) I don’t view the new Mac Pro as materially different in price. Pretty much every main workstation I’ve had in the last 12 yrs has run into the low five-figures. In the grand scheme of things, it’s still cheaper than, say, premium kitchen appliances, though perhaps it doesn’t last as long! On the other hand, I’m not good enough at cooking that my kitchen appliances are tools that enable income. If I wasn’t using my Macs to make money, I doubt I’d be buying such costly machines.

While I am not a video editor, and just do some 3d modeling for fun as part of furniture design or remodeling projects, I feel this machine is warranted for my use in heavy CPU work and/or a desire for a lot of monitors. I’m not in the target GPU-compute market (yet?), but I do want a big workspace. There’s no other Mac that offers this (I get headaches from the glossy displays Apple offers, though the smaller laptops screens are ok).

So now on my desk, I have a pair of Pros, each driving a set of 3×30″ displays, which matches the work I am doing right now. I haven’t had a video lockup in 12 days and counting, which has proven a huge time saver and frustration reducer, so I’m happy that I jumped on this sooner rather than later.


30 Years of Mac

Posted by mitch on January 24, 2014
hardware

My parents bought a Mac 128K in 1984 (pictured below). The screen stopped working in 1993, and it hadn’t been reliable at that point for a number of years–my dad upgraded to a pair of Mac Pluses when they came out and then later he upgraded again to the Mac II.

There were lots of frustrating things about the Mac 128. Almost no software worked on it, since it was outdated almost immediately by the Mac 512. MacWrite didn’t have a spell check or much of anything else. Only one program could run at a time–no MultiFinder. A 1 MB Mac Plus was a significantly better computer, especially if you had an external hard disk that conveniently fit under the Mac–thus increasing speed, storage capacity, and the height of the monitor. Even the headphone port on the 128 was mono, if I recall correctly.

Yet there was something deeply magical about computing in that era. I spent hours goofing off in MacDraw and MS Basic. At one point, my dad had the system “maxed out” with an Apple 300 baud modem, an external floppy drive, and the ImageWriter I printer. At some point, the modem went away and we were modemless for a number of years, but one day he brought home an extra 1200 baud modem he had at his office and I spent hours sorting out the Hayes AT command set to get it to work–a lot of registers had to be set on that modem; it wasn’t just a simple matter of ATDT555-1212.

That reminds me, I need to call Comcast. It seems that they cut their pricing on 100 Mbit connections.


Moving AV Gear to the Basement

Posted by mitch on January 04, 2014
audio, home

When I bought my house in Boston, I gutted most of it and did extensive rewiring, including speaker wires in the living room. Recently, I had a large built-in cabinet/bookcase built for the living room and had to move some of those wires and outlets in preparation for it. Since the electricians had to come out anyway, I decided to move all my AV components into the basement. The goal was just to have the TV, speakers, and subwoofer in the living room.

There are now 5 drops down to the basement for the surround speakers. I soldered RCA keystone jacks onto one of the old speaker drops for the subwoofer–the only place I could find solderable keystone RCA jacks was, strangely enough, Radio Shack (for 57 cents each). Behind the TV, I had the electricians pull 8 new Cat6 drops and a single HDMI cable. I also had the electricians run two 15 amp dead runs that go into a 2-gang box and terminate in AC inlets (male connectors) so that the TV and sub in the living room are plugged into the same surge protection system as the basement, thus avoiding any ground loop issues, and also eliminating the need for surge protectors in the living room for this gear.

Four of the Cat6 drops terminate at the AV shelving. I planned to use 2 of these for serial and IR lines and 2 are held for spares in case of future video-over-Cat6 or other needs. The other four Cat6 lines run to the basement patch panel. Of course, some of these could also be patched back to the AV shelves if needed for uses other than Ethernet.

I’m using a cheap IR repeater from Amazon to control the components from my Harmony remote. This works fine with my Onkyo receiver, HDMI switch, Apple TV, and Roku. It doesn’t work with my Oppo bluray player–apparently there’s something different about the IR pulse Oppo uses, and I couldn’t figure out which general repeaters would work from various forum posts. Fortunately, Oppo sells their own IR repeater system for about $25, and I’ve modified it to run over Cat6 as well. This means I have two IR sensors hidden under the TV that plug into 1/8″ mono jacks in the wall using Leviton keystone modules.

The Playstation 4 and Wii use Bluetooth controllers, which work fine through the floor. Nothing fancy was needed to extend these. It turns out that the Wii sensor bar is an “IR flashlight”–the bar itself doesn’t send any data to the Wii. So I bought one with a USB connector on it so it can plug into any USB power supply. (The original Wii bar had weird 3-tooth screws and I didn’t want to tear it up.)

I also finally got around to building a 12v trigger solution for my amplifier–my 7 yr old Onkyo receiver doesn’t have a 12v trigger for the main zone, but a 10v wall wart plugged into the Onkyo does the trick, now that I’ve soldered a 1/8″ mono plug onto the end and plugged it into the Outlaw amp. (My front speakers are 4 ohm and the Onkyo would probably overheat trying to drive them.)

The final missing piece was a volume display. I missed knowing what the volume was on the receiver, the selected input, and the listening mode, so I built a simple serial device that plugs into the Onkyo’s serial port over Cat6 cables. I have a 20×2 large screen display that queries the Onkyo for status a few times a second (powered by Arduino–firmware code is here). Muting, powered off, volume, listening mode (e.g., THX, Stereo, Pure Audio…) are displayed, as well as the input source. My next step is to add a second serial interface to the display so that I can query the Oppo and show time into the disc, playing state, etc. (Many newer receivers support their serial protocols over Ethernet, albeit at a higher standby power usage, and as far as I can tell, Oppo has not opened up their Ethernet protocol, though their serial protocol is well documented.) The enclosure is rather ugly, but it works for the moment until I build something better.

Note that another option is just to buy a receiver/pre-amp that puts the volume out over HDMI. My receiver is older and leaves the HDMI signal unmolested. Most modern gear will just put the volume up on the screen, but my next processor is going to be a big purchase, and this was a lot cheaper for now.

I did make a few mistakes:

  • The quad coming off the inlets should have been a 4-gang (8 outlets).
  • I almost ended up with only 4 Cat6 drops behind the entertainment center, mostly due to the length of Cat6 cable I had on hand. Happily my electrician went and bought another 1000 ft spool and said, “Mitch, what do you really want?”
  • I probably should have run a second HDMI cable, just in case I ever need it.
  • The 8 Cat6 cables, a coax line (in case I ever want another sub or need a coax line), and the HDMI cable all go into a 3-gang box in the living room. This is a bit tight for this many wires, especially when one of the Cat6 lines splits into two 1/8″ connectors.
  • Not really a mistake, but if you’re doing this and buying new shelving for the rack, buy shelves with wheels. I am just using an old shelf I already had, but wheels would be very handy.

If you have a small living room with a basement or closet nearby, this might be a good way to go if you don’t want to get rid of AV components. With more room to keep things organized and more air flow around the electronics, I’m really happy with how this turned out. Since the bluray player is in the basement, the DVDs and blurays are now in the basement, and this has freed up ~50 linear feet of shelving upstairs. (I’ve ripped a lot of my movies, but it’s a pain and I haven’t done them all.)

And best of all, there is now a lot less crap in the living room.


Why I Hate Computers

Posted by mitch on October 16, 2013
productivity

Sometimes the string never ends.

I was working on some code today, debugging a new set of functions I wrote this morning.

Off and on I’ve had issues with my development VM reaching one of the nodes, another VM, in my test environment. As I went to clear the state on that VM, the network stopped working.

I figured this was perhaps a bug in VMware Fusion 4, so I decided to upgrade to Fusion 6 Pro. I went to the VMware online store to buy it and got an error when trying to put the product into my shopping cart:

Error Number:  SIT_000002

And an error again when I tried again.

I logged into my account, which remembered that I had put Fusion 6 Pro into my shopping cart before. So I went to check out and got an error that the cart was empty.

So I tried adding it again and it worked.

Then I got an error when I put in my credit card number:

PMT_000011 : vmware, en_US, There is money to authorize, But no Aurthorize delegated were applicable

Then I found a free trial of Fusion 6 Pro and downloaded that and installed it on a test Mac Mini.

I then started trying to copy the test VM to the Mac Mini and observed an 11.5 MB/s transfer rate, which is suspiciously close to the maximum speed of 100baseT. But I have GigE. What’s going on? I checked previous network traffic stats on both machines–they had both done 70-90 MB/s activities in the last day.

Wondering if it was an AFP issue, I tried SMB and noticed the network throughput stayed at 11.5ish. Multiple streams didn’t help.

I finally found that the negotiated speed was indeed 100 Mbps on the Mac Pro for some reason. Forcing it to GigE caused the interface to achieve and lose carrier rapidly after a few minutes of working.

I tried to login to my switch and couldn’t remember the password, but I did eventually.

Then I wondered which port the Mac Pro was on.

After many minutes, I tracked the problem to a specific cable, not a switch port, wall port, or a port on the Mac Pro. I’m not sure why; the cable had been working fine for years.

In part of all this I discovered I have very few spare Cat-6 cables.

I logged into Monoprice to order more cables and almost got charged $58 for international shipping–I might not have noticed, except at checkout, they said they only would accept PayPal for international orders. Apparently, Monoprice had decided I lived in the UK since my last order.

Much teeth gnashing to fix my country with their store.

Order placed.

Started to write this blog post and the battery was inexplicably dead in the laptop I sat down with; I had to get a charger.

And don’t even get me started on Time Machine issues today.

I still don’t know if the network will work in that VM or not. I am confident my code doesn’t work yet.

I don’t know how anyone uses a computer. They are way too complicated.


The New Mac Pro

Posted by mitch on June 11, 2013
hardware

I am very excited about the new Mac Pro.

We don’t know the price yet. We don’t have full specifications. It’s not clear this form factor will ever support dual CPU packages or 8 DIMM slots (it seems it might only have 4 sockets). The total price for four 32 GB DIMMs currently runs about $10,000 from B&H. Happily, four 16 GB DIMMs are a lot less—around $1,200. 64 GB of RAM is sufficient for me for now, but based on how my need for memory has grown in the past, I am looking to see a 128 GB option for around $1,200 within two years of owning the machine.

Apple does claim I/O throughput on flash of around 1250 MB/s, which is better than my four-disk SATA SSD RAID 1+0 in my Mac Pro and faster than my first-generation OWC Accelsior PCIe card.

Apple mentions up to 2×6 GB of dedicated video RAM, which significantly beats the 1-3 GB cards we’ve had on the market until now. I also am excited at the prospect of 30″ displays at 3840 x 2160. My three Apple 30″ displays are starting to show their age in terms of the backlight wear—it takes longer and longer for them to come to full brightness. I bought a Dell 30″ for my other desk, and I had to buy a calibrator to get acceptable color out of it. So I am hopeful Apple will ship a matte 30″ 4K display… (this seems rather unlikely).

Only four USB ports is a shame, but not the end of the world. Hopefully the USB 3 hub issues with Macs will be resolved soon.

And then there are the PCI slots. My Mac Pro currently has a 7950 video card in one slot, an Areca 1680x, an eSATA card that I quit using, and the PCIe Accelsior. Frankly, the new Mac Pro meets my PCI expansion needs—external chassis are cheap if I ever really need slots (just $980 for a 3-slot Magma; and Apple mentions expansion chassis are supported). What makes this possible is that Thunderbolt RAIDs are just as fast as Areca SAS configurations and generally require a lot less monkeying around. I have two Promise 18 TB Thunderbolt RAIDs connected to a Mac Mini in my basement for Time Machine backups and they have been fantastic.

So I imagine my 2013 Mac Pro will look like the following configuration:

  • Mac Pro with 8 or 12 cores, depending on price and clock options
  • 64 GB of RAM
  • 512 GB — 1 TB flash storage for boot
  • Thunderbolt ports 1-3 — with DisplayPort adapters for existing displays
  • Thunderbolt port 4 — 12-24 TB Thunderbolt RAID for home directory. I’d love to see a 12×2.5″ SSD RAID 1+0 when 1 TB SSDs get under the $400 price point.
  • 3 USB ports connected to hubs
  • 1 USB port connected to external hard disk for cloning boot drive to
  • Hopefully the audio out line has an optical connection like the AirPort Express and other optical products.

I think this will fit my needs pretty well, as long as a 128 GB RAM upgrade is cheap enough down the line. 256 GB would have been a lot nicer.

And best of all, this configuration will free up at least 4 sq ft of floor space where my Mac Pro and SAS chassis sit. If the computer is quiet enough to sit on the desk, then both the Mac Pro and the Thunderbolt RAID only take up about 1.5 sq ft of room, which would be a tremendous improvement in my office where space is a premium.

Update: I take issue with the complainers who say that the new Mac Pro will lead to a big cable mess. For me, I expect it will be about the same, but take up less floor space:
