Archive for December, 2009

Holiday Hiatus

Posted in Life in General, Video Game Development, Video Games on December 30, 2009 by Dan Amerson

I should have posted ahead of my holiday break to warn any regular readers about the way I take time off. I pretty much disconnect from the world. I don’t check email; I don’t surf the net; I sit around and read, watch movies, and play single-player games. Ergo, there’s been a gap in posts. I’m back and backlogged, so here’s some link spam from my feeds and the holidays to get things going again:

  • Blood Bowl, a Gamebryo title, is coming to retail in Q1 2010. For folks that are fans of the IP, this is good news.
  • Kim Pallister has an interesting discussion of Amazon ratings and reviews based on another article from Seth Godin. One or both are worth the read.
  • Our friends over at Epic showed off UE3 running on iPhone. That’s cool tech. I’m still not sure what the market/business plan for big engines is on the iPhone, but that doesn’t detract from cool tech.

Also of note, I got a video camera over the break, the Kodak Zi8. I told my wife that I wanted to do some streaming video experiments and she went out and found me a tool. A very thoughtful Christmas present that should lead to some fun.

dba

ACM and Copyright

Posted in Video Game Development on December 22, 2009 by Dan Amerson

I keep meaning to rejoin the ACM and get digital library access. There are a lot of great papers in there, and I figured I should read them. Of late, though, the ACM has taken a particularly draconian stance on copyright. Here’s a story from the Real-Time Rendering blog where they asked a grad student to take down mere links to papers, not the papers themselves. That was righted, but today I hear they are fighting the open publication of papers that receive federal money. In other words, you can’t read something you paid for. Stories at Christer Ericson’s blog here and again on Real-Time Rendering. This policy is already in place for items funded by the NIH, as noted in the government request for input.

I think I’ll hold off on that ACM membership for now. It may preclude my reading of some papers, but this is getting out of hand.

As a side note, perhaps more general copyright reform is in order. I read a blurb a while back about the ideal copyright period being fourteen years or so to maximize public benefit.

dba

Quick Shot: Intel Parodies and NVIDIA

Posted in Life in General, Video Game Development on December 14, 2009 by Dan Amerson

Edit: Updated the post per Tim Farrar’s comment. He was at NVIDIA at the time of his post. That doesn’t change my slant on his opinion. I’ve always thought he has a sharp analytic eye for parallel programming architectures and paradigms.

I was traveling out to California over the weekend, so I didn’t put together that post I promised on next gen architectures. One could argue that I had plenty of time on the plane, but I’ve never been able to work well on planes. My mind is not in the right place, and my laptop is too bulky. The post will come, but it’s likely delayed a week. When I’m in CA, my time is fractured by meetings. For now, a couple of quick things.

After posting previously about Larrabee’s woes, a colleague pointed me to this site. It’s a whole bunch of parody cartoons from NVIDIA about Intel. My first reaction was that NVIDIA wouldn’t be so bold, but the whois info on the domain looks like NVIDIA, and this article concurs. Some of these cartoons are pretty amusing and accurate, including the current, Santa-themed one. I’m sure Intel would love a GPU for Christmas. 🙂

Now, lest I be accused of bashing Intel too much, I should point out that NVIDIA can’t brag too much yet. Their GT200 cards are pretty awesome. However, they haven’t released their Fermi cards, which will have D3D11 support, so they are behind as well. By all accounts, Fermi is an awesome architecture. It’s got good double-precision performance, the ability to schedule multiple warps at once for better throughput, and a huge number of cores. The tech white paper is here, and here’s a quick note from Tim Farrar, whose opinion I trust. (Note that Tim works at NVIDIA, and this post of his on Fermi went up shortly after his hire.) So, it’s nice for NVIDIA to make fun of Intel, but they are lagging behind a bit themselves. They have their own set of HW woes in the form of what appear to be long dev times or delays on the Fermi cards. Right now, AMD is sitting out there with the only D3D11 cards on the market.

dba

Cuse You, Muphy!

Posted in Life in General, Miscellany on December 10, 2009 by Dan Amerson

I’ve got a big trip next week where I’ll need my laptop to function. What does that mean? The lappy has to break. But does it really break to the point where I’d upgrade? No, just the ‘R’ key stops working. That’s right. I had to cut and paste every ‘R’ you see in this post.

Barg!

dba

Larrabee, Not Dead in the Strictest Sense

Posted in Video Game Development on December 10, 2009 by Dan Amerson

In a previous post, I said that Larrabee was dead. That post had some details about why I thought Larrabee wouldn’t have success as more revisions of the HW came out. As a response to that, a co-worker sent me this article, which is an analysis of why Intel won’t buy NVIDIA. I wholly agree with the thesis of the argument. Intel won’t buy NVIDIA. However, there are a few reasons given in the article that I should cite here:

  • Larrabee isn’t dead. There will be successors to the v1 hardware.
  • Intel views x86 as the superior architecture and puts GPU architecture in general in a camp of things that will fade away.
  • Intel doesn’t see a future for SIMD GPU design. (i.e., It doesn’t do MIMD.)

Based on those points, one can see why Intel would continue to pursue Larrabee. SIMD programming is hard. It’s a waiting game until x86 outpaces SIMD GPUs in terms of utility, and Intel has deep pockets. In this regard, Larrabee isn’t dead. However, I think the timeframe is important. Will Larrabee outpace GPUs in the next 3 years? 5 years? 10 years? When I look at how things are shaping up, I don’t see it happening in the next 5 years. If that doesn’t happen, then Larrabee misses the console cycle, which delays things a few more years at minimum just because so many developers will focus on the consoles.

When I say Larrabee is dead, I really mean it from the perspective of someone developing game technology and the perspective of what we’ll need to target for our next two or three revisions of technology. I don’t see Larrabee as an integral part of that vision. I see the next round of consoles, and PCs for that matter, adding some more CPU cores, continuing to have shared or largely shared memory, and having a wealth of spare SIMD compute power. Thinking about it, writing up my logic on that would actually make for a good post. I don’t have the time right now, but check back.

dba

Front Line Finalist

Posted in Video Game Development, Video Games on December 8, 2009 by Dan Amerson

I just saw that Gamebryo LightSpeed made the finalist list this year for the Front Line Awards. Always good to see your hard work recognized in the marketplace. Even if we don’t win, it’s cool to be nominated by your peers.

dba

Larrabee Dead. Not Surprised.

Posted in Video Game Development with tags , on December 7, 2009 by Dan Amerson

I had a lunch discussion last Thursday, and during it I predicted that Larrabee wouldn’t be a big success. Then, on Friday, I saw this article. Apparently, I’m more prescient than I thought. There’s a little more input over here on the Real Time Rendering Blog. The first Larrabee card isn’t going to be released, and I don’t think we’ll see great success from any successors. Pun intended. 🙂

When I look at Larrabee, I see a couple of hurdles:

  • It sinks or swims as a GPU.
  • Utilizing all the power requires very wide SIMD.

Larrabee’s biggest problem is figuring out how to get people to buy it. Developers don’t want to spend a lot of effort writing software for it if there’s not a large install base. Conversely, without some sort of software to give enthusiasts a reason to buy it, the install base won’t build. Looking at what’s out there, you have only one major pillar for building an install base without a bunch of Larrabee-native titles: DirectX gaming performance. Larrabee has to succeed as a DX GPU. Without that, it can’t make it as Larrabee. When you couple that fact with the reality that silicon from AMD and NVIDIA is targeted at being a great GPU first, it’s hard to believe that the very flexible Larrabee cards will be able to compete on a per-dollar or per-watt basis.

The second argument is that Larrabee will be big because x86 code can run on it with minimal modifications and be massively parallel. This is true. It’s possibly easier than getting code running through CUDA. I don’t know; I haven’t written any code for Larrabee. However, Larrabee uses 16-float-wide vector instructions. I’ll bet money that any reported floating-point performance of X teraflops assumes you’re using those 16-wide instructions. If you port scalar float or SSE vector4 code to Larrabee, you’re still leaving 15/16 or 3/4 of the flops on the table. It will require hand-tailored code for Larrabee, and then we go back to my first argument.
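To put some rough numbers behind that lane math, here’s a small C++ sketch of my own (illustration only, not actual Larrabee or LRBni code) showing the same array addition as a scalar loop and as an SSE vector4 loop, with comments noting how much of a hypothetical 16-float-wide unit each version would actually fill:

    // Illustration only: the same array add written two ways. Neither is
    // real Larrabee code; the point is how many lanes of a hypothetical
    // 16-float-wide unit each style fills per math instruction.
    #include <xmmintrin.h> // SSE intrinsics

    // Scalar port: 1 useful float per add -> 1/16 of a 16-wide unit,
    // leaving 15/16 of the flops on the table.
    void add_scalar(const float* a, const float* b, float* out, int n)
    {
        for (int i = 0; i < n; ++i)
            out[i] = a[i] + b[i];
    }

    // SSE vector4 port: 4 floats per add -> 4/16 of a 16-wide unit,
    // leaving 3/4 of the flops on the table. (Assumes n is a multiple
    // of 4 for brevity.)
    void add_sse(const float* a, const float* b, float* out, int n)
    {
        for (int i = 0; i < n; i += 4)
        {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
    }

Hitting the advertised peak would mean rewriting loops like these around the full 16-wide instructions, which is exactly the hand-tailored, Larrabee-specific work I’m talking about.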

dba