Serenius Posted August 2, 2011
If it's a scam, it sure seems like an elaborate one, and given that it's government-funded, it just doesn't seem very likely. Ignorance or troll? You decide.
Rick_D Posted August 2, 2011
Let's just watch their next video. See you in a year, then.
Jetsetlemming Posted August 3, 2011
Notch brings up a similar thought that I had on watching the video: even with each dot simply being coordinates and color, the "atom" density they're claiming would lead to absolutely enormous data sizes. Look at how much of a drag on performance I/O is for Minecraft, and then imagine every block is one "atom" and there are trillions of them instead. Maybe they could claim to be saving on these costs by making everything hollow, I don't know. But then the question of WHY you'd want this rendering style over polygons, textures, and shaders, and all the wonderful things we've come up with to fake detail using those elements, immediately comes to mind...
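To put rough numbers on the storage concern in the post above, here is a back-of-envelope sketch. The 64-atoms-per-cubic-millimetre density echoes the figure quoted in the demo videos, but the 16-bytes-per-point cost, the island dimensions, and the one-metre shell of geometry are all assumptions made purely for illustration:

```python
# Back-of-envelope storage estimate for a naive, uncompressed point cloud.
# The 64 atoms/mm^3 density echoes the demo's claimed figure; the 16 bytes
# per point (three float32 coordinates plus 32-bit RGBA) and the island
# dimensions are assumptions made purely for illustration.

ATOMS_PER_MM3 = 64
BYTES_PER_POINT = 16  # x, y, z as float32 + 32-bit colour

def naive_storage_bytes(volume_mm3: float) -> float:
    """Uncompressed size of a point cloud filling the given volume."""
    return volume_mm3 * ATOMS_PER_MM3 * BYTES_PER_POINT

# A 1 km x 1 km island with a 1 m thick shell of solid geometry:
volume_mm3 = 1_000_000 * 1_000_000 * 1_000  # mm^3
petabytes = naive_storage_bytes(volume_mm3) / 1024**5
print(f"roughly {petabytes:.0f} PiB uncompressed")
```

Even this naive estimate lands in the petabyte range, which is why any practical implementation would need aggressive instancing and compression, and why the demo reuses the same handful of meshes so heavily.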
2d-chris Posted August 3, 2011
It's quite simple: make an actual game that works on new technology and people will start to take interest!
Thrik Posted August 3, 2011
Quoting Jetsetlemming: "Even with each dot simply being coordinates and color, the 'atom' density they're claiming would lead to absolutely enormous data sizes."
I think that's kind of the point of this technology, isn't it? Floating-point data is far from a new concept; it's their increasingly efficient algorithms for using that data without slowdown and huge amounts of overhead that are causing a stir, plus a proper SDK for pulling it all together.
I personally think this has a very good chance of being the future. At the moment there are naturally going to be countless naysayers who don't like the idea of having to change the status quo, but this technology is quite clearly better than the current approach. Things like LOD management, pop-in, and having to waste huge amounts of time optimising for crappy hardware are suddenly no longer an issue. Detail can now be as sophisticated as you're capable of modelling.
Obviously the tools are nowhere near usable for making games, but the sheer amount of money going into this project tells me they're very serious about doing everything required to make it something developers can readily start experimenting with and ultimately embracing. Just since last year they've managed to get direct importing from 3D Studio Max and Maya implemented, meaning modellers hardly have to change what they do at all, apart from no longer having to spend ages trimming polies off. It's inevitable that similar bridges will be built for animation, audio, shaders, etc. They just need time.
The real question is: why fake it when you don't have to? That's a question developers will seriously start asking themselves in the near future, IMO, especially as the prospect of jumping ahead five-plus generations of graphics starts sinking in.
kleinluka Posted August 3, 2011
I remain skeptical until they demonstrate the technology in a real-time animated game situation instead of a static environment with millions of copy-pasted meshes.
Jetsetlemming Posted August 3, 2011
Quoting Thrik: "The real question is: why fake it when you don't have to?"
Some of the ideas presented, like having each pixel on the display be specifically looked up and rendered, rather than the entire world being rendered and then translated into a display, seem really smart, and relate to raytracing concepts I've heard of before. But that could possibly work with polygons too...
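The per-pixel lookup idea described above can be sketched as a toy ray march: every screen pixel steps a ray through a voxel grid and stops at the first occupied cell, so the work scales with the number of pixels rather than the amount of geometry. The scene, camera, and step size below are invented for illustration and bear no relation to Euclideon's actual (undisclosed) algorithm:

```python
# Toy per-pixel ray march into a boolean voxel grid: each pixel casts a ray
# and stops at the first occupied cell, so cost scales with pixel count,
# not scene complexity. Scene, camera, and step size are all invented.

GRID = 8  # an 8x8x8 scene
# A half-width slab of "floor" voxels (x < 4, z < 4), invented for the demo.
solid = {(x, y, z) for x in range(4) for y in range(GRID) for z in range(4)}

def march(ox, oy, oz, dx, dy, dz, max_steps=64, step=0.25):
    """Return the first occupied voxel hit by the ray, or None."""
    x, y, z = ox, oy, oz
    for _ in range(max_steps):
        cell = (int(x), int(y), int(z))
        if cell in solid:
            return cell
        x, y, z = x + dx * step, y + dy * step, z + dz * step
    return None

def render(width=8, height=8):
    """One character per pixel: '#' where the ray hits geometry."""
    rows = []
    for py in range(height):
        row = ""
        for px in range(width):
            # Orthographic camera looking straight down -z from above.
            hit = march(px + 0.5, py + 0.5, GRID - 0.5, 0.0, 0.0, -1.0)
            row += "#" if hit else "."
        rows.append(row)
    return "\n".join(rows)

print(render())  # left half of the image hits the slab, right half misses
```

A real engine would replace the fixed-step march with a hierarchical traversal (e.g. of an octree) so empty space is skipped in large jumps, but the per-pixel structure is the same.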
Rick_D Posted August 3, 2011
That's essentially what deferred rendering is.
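For readers unfamiliar with the term: deferred rendering splits a frame into a geometry pass, which writes per-pixel surface attributes into a G-buffer, and a lighting pass, which shades each covered pixel exactly once from that buffer. Below is a toy illustration of the two-pass structure only; the "triangles" arrive pre-rasterised as pixel samples, which is a large simplification of what a GPU actually does:

```python
# Toy two-pass "deferred" pipeline: pass 1 rasterises geometry attributes
# into a G-buffer (one record per pixel, nearest surface wins), pass 2
# lights each covered pixel exactly once. Geometry and light are invented.

WIDTH, HEIGHT = 4, 2

def geometry_pass(triangles):
    """Pass 1: keep the nearest surface's attributes per pixel (depth test)."""
    gbuffer = [[None] * WIDTH for _ in range(HEIGHT)]
    for tri in triangles:
        for (px, py), depth, albedo in tri:  # pre-"rasterised" samples
            current = gbuffer[py][px]
            if current is None or depth < current["depth"]:
                gbuffer[py][px] = {"depth": depth, "albedo": albedo}
    return gbuffer

def lighting_pass(gbuffer, light_intensity=0.5):
    """Pass 2: shade each covered pixel once, however many triangles
    overlapped it during pass 1."""
    return [[None if rec is None else rec["albedo"] * light_intensity
             for rec in row] for row in gbuffer]

# Two overlapping "triangles" covering pixel (0, 0); the nearer one wins.
tris = [
    [((0, 0), 5.0, 1.0), ((1, 0), 5.0, 1.0)],  # far surface, albedo 1.0
    [((0, 0), 2.0, 0.8)],                      # near surface, albedo 0.8
]
image = lighting_pass(geometry_pass(tris))
print(image[0])  # pixel (0, 0) is shaded from the near surface only
```

This is close to, but not the same as, the per-pixel scene lookup described above: deferred rendering still rasterises all geometry in pass 1, it just defers the shading work to one pass per pixel.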
twiz Posted August 3, 2011
The question is: at the maximum level of detail required for a game, the point where artists no longer have to think or care about poly count, which technology will perform better, polygons or this? I'd be very surprised if current-gen hardware can handle this technology and look as good as "traditional" tech, but maybe in a few years it will become more feasible.
Froyok Posted August 3, 2011 (Author)
Response from the devs: http://www.kotaku.com.au/2011/08/minecr ... hics-hype/
New response from Notch: http://notch.tumblr.com/post/8423008802 ... not-a-scam
Sentura Posted August 3, 2011
Quoting twiz: "Which technology will perform better? Polygons or this?"
I don't think you can ask about performance at maximum detail in that sense, but you can benchmark both technologies on a computer with the same specs and see which runs better.
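A benchmark along the lines suggested above could be as simple as timing both renderers on the same scene and machine and comparing average frame times. The sketch below uses Python's timeit with stand-in stub renderers; real engines would be measured the same way through their frame loops:

```python
# Minimal apples-to-apples timing harness: run each renderer on the same
# scene and machine and compare average frame times. The two "renderers"
# below are stand-in stubs, not real polygon or point-cloud engines.
import timeit

def render_polygons(scene):
    return sum(len(mesh) for mesh in scene)        # stand-in workload

def render_points(scene):
    return sum(1 for mesh in scene for _ in mesh)  # stand-in workload

def frame_time_ms(renderer, scene, frames=200):
    """Average milliseconds per frame over a fixed number of calls."""
    total = timeit.timeit(lambda: renderer(scene), number=frames)
    return total * 1000.0 / frames

# A toy scene: 100 meshes of 3 triangles each.
scene = [[(i, i + 1, i + 2)] * 3 for i in range(100)]
for name, fn in [("polygons", render_polygons), ("points", render_points)]:
    print(f"{name}: {frame_time_ms(fn, scene):.4f} ms/frame")
```

The important part is holding the scene, resolution, and hardware constant so the only variable is the rendering technique itself.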
Jetsetlemming Posted August 5, 2011
Quoting Rick_D: "That's essentially what deferred rendering is."
Oh, neat. I thought it was just a lighting technique or something like that, but then the most advanced programming I've managed so far is auto-balancing data trees (and I am very proud of my trees, thank you).
Edit: the wiki says deferred lighting is a separate related thing, but I understand almost none of the upper half of the article.
Gloglebag Posted August 27, 2011
A 40-minute interview with the main dude, in which he responds to Notch and to the lack of animation in their videos: http://vimeo.com/27522131
Bunglo Posted August 27, 2011
Polycount's been crapping over that video for a few weeks now.