I’m a Mac (or, “Emery Inside”)

I’m a Mac (though I prefer John Hodgman)

I used to be a PC guy, but have completely gone Mac (MacBook Air, Mac Mini, iPhone, iPad, Jobs Distortion Field Glasses, etc.). But Mac went Emery before Emery went Mac! Proof below:

From http://www.opensource.apple.com/source/Libc/Libc-594.9.1/gen/magazine_malloc.c:

Multithread enhancements for “tiny” allocations introduced February 2008.
These are in the spirit of “Hoard”. See:
Berger, E.D.; McKinley, K.S.; Blumofe, R.D.; Wilson, P.R. (2000).
“Hoard: a scalable memory allocator for multithreaded applications”.
ACM SIGPLAN Notices 35 (11): 117-128. Berger2000.

Retrieved on 2008-02-22.

A Tighter Cinch

My student Charlie Curtsinger pointed out a better alternative to Cinch: BetterTouchTool. The name is not as nice, but unlike Cinch, BetterTouchTool lets you snap windows to corners. By default, these occupy 1/4th of the screen, but the proportions are adjustable. I have only been using it for ten minutes, but it works great – and Charlie says he has been using it for a while without any issues.

It’s a Cinch

Since making the move to Mac, I have discovered and installed some programs that I’ve found quite useful. Here’s one I use every day.

Cinch is a window manager that emulates a feature from Windows 7, which has some nice UI innovations (!). With Cinch installed, dragging a window to the top of the screen zooms it to fill the screen. The nicest part (which I don’t think Windows does) is that if you drag a window to one side of the screen, it fills exactly that half. Tremendously useful on laptops. Seven bucks, totally worth it.


The Times of London has just released its latest ranking of the top universities in the world. The list is behind a paywall, but here are some fun data points.

* Harvard is #1, Caltech (?!) is #2
* The University of Massachusetts is ranked #56.
* The University of Cincinnati is ranked #190.

Why mention the University of Cincinnati? Just to point out that my alma mater, the University of Texas at Austin, is not even on the list, making it clearly worse than the University of Cincinnati. Though I think the Times just forgot to put UT-Austin on their giant dartboard.

Not to single out the Times. For years, UT-Austin was ranked #5 in Databases on the US News rankings, with exactly one faculty member doing database research. US News also currently has a separate ranking category for “Programming Language” (sic). Cornell is high on that list, but the two big guns in PL (Pingali & Morrisett) decamped years ago, and another failed to get tenure, so there’s only one PL faculty member left standing.

It now occurs to me that UWashington is also high on the list, and also had exactly one faculty member in PL…the trick, apparently, is to be the last guy standing.

But Mike Ernst has now joined UW, so obviously it will fall out of the rankings, and I’ve got my eyes on that spot.

Hey, Eliot and Yannis, sorry guys, but it’s either you or the rankings — I’m sure your families will understand…


My student Gene and I have just submitted a paper on the Most. Secure. Heap. Ever. 🙂 We plan to release the code soon, initially for Linux platforms. It’s a variant of the DieHard allocator, but with a number of key improvements that make it far more secure – better than all allocators we know of (something the analytical framework in this paper lets us actually evaluate). Not yet included are some new benchmark results showing that DieHarder performs about as well as, or better than, the OpenBSD allocator on a number of insanely allocation-intensive programs. Feedback welcome.

DieHarder: Securing the Heap
Gene Novark and Emery D. Berger

Heap-based attacks depend on a combination of memory management errors and an exploitable memory allocator. We analyze a range of widely-deployed memory allocators, including those used in Windows, Linux, FreeBSD, and OpenBSD. We show that despite numerous efforts to improve their security, they remain vulnerable to attack. We present the design and security analysis of DieHarder, a memory allocator that provides the highest degree of security from heap-based attacks of any practical allocator.
UMass CS Tech Report 2010-033

Winning the War on Bugs

This is a draft version of an article to appear in our departmental newsletter, Significant Bits (with links added).

Nearly all software ships with known bugs, and others are just lurking in the code waiting to be discovered. Some bugs are benign; for example, a page might not display correctly in a browser. But more serious bugs cause programs to crash unexpectedly or leave them vulnerable to attack by hackers. These bugs are difficult for programmers to find and fix. Even when the bugs are critical and security-sensitive, it takes an average of one month between initial bug reports and the delivery of a patch.

Rather than waiting for programmers to fix their bugs, or for hackers to find and exploit them, Professor Emery Berger’s group is designing systems to make software bug-proof. These systems allow buggy programs to run correctly, make them resistant to attack, and even automatically find and fix certain bugs. This work, developed jointly with Ben Zorn at Microsoft Research, was an important influence on the design of the Fault-Tolerant Heap that today makes Windows 7 more resistant to errors.

Defending Against Bugs

Berger and Zorn first developed an error-resistant system called DieHard, inspired by the film featuring Bruce Willis as an unstoppable cop.

DieHard attacks the widespread problem of memory errors. Programs written in the C and C++ programming languages – the vast majority of desktop, mobile, and server applications – are susceptible to memory errors. These bugs can lead to crashes, erroneous execution, and security vulnerabilities, and are notoriously costly to repair.

Berger uses a real-estate analogy to explain the problem of memory errors. Almost everything done on a computer uses some amount of memory—each graphic on an open Web page, for example—and when a program is running, it is constantly “renting houses” (chunks of memory) to hold each item, and putting them back on the market when they are no longer needed. Each “house” has only enough square footage for a certain number of bytes.

Programmers can make a wide variety of mistakes when managing their memory. They can unwittingly rent out houses that are still occupied (a dangling pointer error). They can ask for less space than they need, so items will spill over into another “house” (a buffer overflow). A program can even place a house up for rent multiple times (a double free), or try to rent out a house that doesn’t exist (an invalid free), leading to havoc when the renter shows up. These mistakes can make programs suddenly crash, or worse: they can make a computer exploitable by hackers.
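To make the real-estate analogy concrete, here is a toy “landlord” in Python (purely illustrative – real allocators manage raw memory in C, and `ToyHeap` and its slot numbers are invented for this sketch): it rents out numbered houses and catches the double-free and invalid-free mistakes described above.

```python
class ToyHeap:
    """Toy allocator: 'houses' are slot numbers; 'rented' tracks occupancy."""
    def __init__(self, num_slots=8):
        self.vacant = list(range(num_slots))
        self.rented = set()

    def malloc(self):
        slot = self.vacant.pop()      # rent any vacant house
        self.rented.add(slot)
        return slot

    def free(self, slot):
        if slot not in self.rented:
            if slot in self.vacant:
                raise RuntimeError(f"double free of slot {slot}")
            raise RuntimeError(f"invalid free of slot {slot}")
        self.rented.remove(slot)
        self.vacant.append(slot)      # back on the market

heap = ToyHeap()
a = heap.malloc()
heap.free(a)          # fine: returned to the market
try:
    heap.free(a)      # double free: this house is already vacant
except RuntimeError as e:
    print(e)
try:
    heap.free(999)    # invalid free: no such house exists
except RuntimeError as e:
    print(e)
```

A real C heap, of course, does not raise a friendly exception: these mistakes silently corrupt the allocator’s bookkeeping, which is exactly what attackers exploit.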

The way “addresses” are assigned also makes computers vulnerable. Houses (memory locations) with especially desirable valuables, like passwords, will always be on the same lot on the same street. If hackers can locate a password once, they can easily locate the password’s address on anyone’s version of the same program.

DieHard attacks these problems in several ways. First, it completely prevents certain memory errors, like double and invalid frees, from having any effect. DieHard keeps important information, like which houses are rented and which are not (heap metadata), out of a hacker’s reach. Most importantly, DieHard randomly assigns addresses—a password that has a downtown address in one session may be in the suburbs next time around. This randomization not only adds security but also increases resilience to errors, reducing the odds that dangling pointer errors or small or moderate overflows will have any effect.
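A back-of-the-envelope simulation shows why random placement helps (this is a toy model with made-up numbers, not DieHard’s actual algorithm): if an overflow spills one slot past an object, a deterministic heap puts a live victim there every single run, while a randomized heap only rarely does.

```python
import random

def overflow_corrupts(randomize, slots=64, live=8, trials=10_000):
    """Fraction of runs in which an overflow one slot past object 0
    lands on another live object (toy model, uniform placement)."""
    hits = 0
    for _ in range(trials):
        if randomize:
            placed = random.sample(range(slots), live)   # random addresses
        else:
            placed = list(range(live))                   # packed together
        victim_slot = placed[0] + 1    # overflow spills into the next slot
        if victim_slot in placed[1:]:
            hits += 1
    return hits / trials

print(f"deterministic: {overflow_corrupts(False):.2f}")  # always hits a neighbor
print(f"randomized:    {overflow_corrupts(True):.2f}")   # rarely does
```

With a heap several times larger than the set of live objects, the chance that the overflow hits anything important drops dramatically – and the attacker can no longer count on the password being “downtown.”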

Exterminating the Bugs

While Professor Berger is more than pleased that the DieHard work has influenced the Windows 7 Fault-Tolerant Heap, he hopes that Microsoft will adopt the technology that Zorn, Berger, and his Ph.D. student Gene Novark designed next, called Exterminator. Exterminator not only finds errors but also automatically fixes them. Exterminator uses a variant of DieHard (called DieFast) that constantly scans memory looking for signs of errors. DieFast places “canaries” – specific random numbers – in unused memory. Just like in a coalmine, a “dead” canary means trouble. When DieFast discovers a dead canary, it triggers a report containing the state of memory.
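The canary idea can be sketched in a few lines of Python (a simulation of the mechanism, not DieFast’s actual code, which works on raw C heaps): unused slots are filled with a per-run random value, and a later scan reports any slot whose canary has been overwritten.

```python
import random

CANARY = random.getrandbits(32)       # per-run secret value

def make_heap(size):
    return [CANARY] * size            # every free slot holds the canary

def scan(heap):
    """Return indices of 'dead' canaries: free slots someone wrote over."""
    return [i for i, word in enumerate(heap) if word != CANARY]

heap = make_heap(8)
heap[3] ^= 1                          # a stray write flips bits in a free slot
print(scan(heap))                     # → [3]: a dead canary means trouble
```

Because the canary value is random and secret, a buggy (or malicious) write cannot reliably forge it, so a stomped slot is detected with very high probability.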

Exterminator next applies forensic analysis to these reports. With information gleaned from several users running a buggy program, Exterminator can pinpoint the source and location of memory errors. From that point on, Exterminator protects the program from that error by “padding” buggy memory requests to prevent overflows, and delaying premature relinquishing of memory to prevent dangling pointer errors.
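The two repairs can be sketched in the same toy style (the names `patched_malloc` and `DeferredFree` are invented for this illustration; Exterminator applies the equivalent fixes inside a C allocator):

```python
def patched_malloc(request_bytes, pad=16):
    """Repair #1 (sketch): over-allocate so a small overflow
    lands harmlessly in padding instead of a neighboring object."""
    return bytearray(request_bytes + pad)

class DeferredFree:
    """Repair #2 (sketch): quarantine freed objects for a while, so
    dangling pointers still point at valid, untouched memory."""
    def __init__(self, delay=100):
        self.delay = delay
        self.quarantine = []

    def free(self, obj):
        self.quarantine.append(obj)
        if len(self.quarantine) > self.delay:
            self.quarantine.pop(0)    # only now is the oldest really released

buf = patched_malloc(8)   # caller asked for 8 bytes...
buf[10] = 0xFF            # ...a small overshoot hits padding, not a neighbor
```

The pad size and quarantine delay are tuned per bug from the forensic reports, so the fix costs little memory while neutralizing the specific error observed in the field.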

Berger notes that since Microsoft already gathers information when programs crash, using techniques similar to those in Exterminator would be a natural next step to quickly find and fix memory errors.

Professor Berger is now tackling the problem of concurrency errors – bugs that are becoming more common with the widespread adoption of multicore CPUs. His group recently developed Grace, a system that prevents concurrency errors in C and C++ programs, and Berger hopes that some version of it will also gain widespread adoption as part of an arsenal to protect programs from bugs.

Buffering Now Means Pausing Intermittently, Other CS Terms Redefined For Your Convenience

Several video players, when filling their video buffers, report this fact to the user directly, as in, “Video buffering.” I have so far been unable to find any non-computer scientist who actually knows what the term “buffering” means. Googling for “video buffering” reveals two things:

  1. “Video buffering” does not mean “the video is being buffered” but rather, “the video is itself buffering.”
  2. Which is baffling until you read more web postings and learn that “to buffer” means “to start and stop intermittently” (as in, “I totally hate it when the video buffers.” Me too.)

What it actually means (for the non-CS crowd) is best explained by analogy. When you first go to use a garden hose, it takes some time for the water to start flowing out. From then on, as long as the water is on, it flows out of the hose at a constant rate. The garden hose is the buffer. Data is flowing into your computer, but until there’s enough of a flow to provide smooth video, you have to wait for the buffer to fill up. For more info, see Wikipedia. Still, the video player shouldn’t actually use the word “buffering.”
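In code, the hose analogy looks something like this (a toy simulation with made-up rates, not any real player’s logic): playback waits until the buffer holds a cushion of frames, then drains it while the network keeps refilling it, and it pauses – the other kind of “buffering” – whenever the buffer runs dry.

```python
from collections import deque

def simulate(download_rate, play_rate, start_threshold, total_frames):
    """Toy model: returns how many times playback pauses to re-buffer."""
    buffer, played, pauses, playing = deque(), 0, 0, False
    while played < total_frames:
        # network side: frames trickle in until the whole video has arrived
        for _ in range(download_rate):
            if len(buffer) + played < total_frames:
                buffer.append(1)
        # player side: wait for the cushion before (re)starting playback
        if not playing and (len(buffer) >= start_threshold
                            or len(buffer) + played >= total_frames):
            playing = True
        if playing:
            if len(buffer) >= play_rate:
                for _ in range(play_rate):
                    buffer.popleft()
                played += play_rate
            else:
                playing, pauses = False, pauses + 1   # buffer ran dry: a pause
    return pauses

# fast connection: one initial wait, then smooth playback
print(simulate(download_rate=30, play_rate=24, start_threshold=48, total_frames=240))
# slow connection: the dreaded intermittent "buffering"
print(simulate(download_rate=12, play_rate=24, start_threshold=48, total_frames=240))
```

When the download rate exceeds the playback rate, the buffer only has to fill once; when it falls short, no buffer size can save you – the hose simply isn’t delivering enough water.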

This led me to wonder what people think about other CS jargon words that have leaked out into userland, like “caching” or “logging.” A number of websites say that you should periodically clear your web caches in order to speed up your browser, as in, “clearing your cache can significantly improve the speed and performance of your browser”. Really!?!

(Non-CS folks: That’s exactly backwards. Caching is what speeds up your browser by keeping recently-used images, etc., on disk – in a “cache” – rather than having to fetch them across the network. It’s like having recently checked-out books at home rather than in the library. Clearing the cache – returning all the books to the library – will definitely make reading those books slower.)

Any other CS jargon terms you’ve seen in “the real world”? (And Emacs screeching to a halt and announcing that it’s “garbage collecting” does not count. Emacs is not the real world.)

Emery Berger's Blog
