Cascaval et al.'s skepticism on transactional memory

Alexa Weber-Morales of Parallelaware asked for my views on "Software Transactional Memory: Why Is It Only A Research Toy?" given my recent column predicting that the stars are aligning for STM as the model of choice for the manycore era. (Incidentally, if they ever bring back Schoolhouse Rock, my suggested title for that column was "Transaction Faction Gaining Traction." The song practically writes itself!)

The Cascaval article is very interesting. It's the most pessimistic thing I've seen about STM, and I tend to give a lot of credence to people who say "we tried this for two years and it failed." On the other hand, the core of their complaint is "muddled semantics" plus performance issues, both of which are fast-moving areas.

The Harris et al. paper addresses several areas of semantics, including exceptions, and it would be fascinating to hear Cascaval et al.'s reaction to that paper (and vice versa).
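To make concrete what "exception semantics" means here: the basic question is what happens to a transaction's writes when an exception propagates out of an atomic block — are they committed, or discarded? The sketch below is not from either paper; it just illustrates one possible answer, the one GHC's Haskell STM happens to implement, where an escaping exception aborts the transaction.

```haskell
import Control.Concurrent.STM
import Control.Exception

main :: IO ()
main = do
  balance <- newTVarIO (100 :: Int)
  -- The transaction updates 'balance' and then throws. In GHC's STM,
  -- an exception escaping 'atomically' aborts the transaction, so the
  -- write to 'balance' is discarded rather than committed.
  result <- try $ atomically $ do
    modifyTVar' balance (+ 50)
    throwSTM (userError "validation failed")
  case (result :: Either IOException ()) of
    Left e  -> putStrLn ("transaction aborted: " ++ show e)
    Right _ -> putStrLn "transaction committed"
  readTVarIO balance >>= print   -- prints 100: the +50 never took effect
```

Whether abort-on-exception is the right default (as opposed to committing, or letting the programmer choose) is exactly the kind of question Harris et al. work through, and exactly the kind of "muddled semantics" Cascaval et al. are complaining about.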

Performance... there's not a doubt in my mind that TM will only be practical with some level of hardware support. I'll go further and say that there's not a doubt in my mind that whatever concurrent programming model succeeds will require some level of hardware support. I don't think that's news. The challenge is making sure that you build hardware that's consistent, which boils right back down to the semantics issue. Without a calculus for this stuff, the hardware guys are flying a little blind.

So I look at these two articles and think that it's a little bit of Cascaval saying "the glass is mostly empty" and the Harris article filling the glass up a little and saying "looks like the glass is pretty full."