Thursday July 2nd, 2015
| By Alex Osborn
Batman: Arkham Knight publisher Warner Bros. Interactive was apparently well aware of the technical issues that currently plague the PC version of Rocksteady Studios' trilogy-capper.
According to sources close to Kotaku, the publisher knew the PC version had problems months ago, but decided to ship the game anyway.
"I will say that it’s pretty rich for WB to act like they had no idea the game was in such a horrible state," an anonymous QA tester said. "It’s been like this for months and all the problems we see now were the exact same, unchanged, almost a year ago."
A few days ago, Warner Bros. decided to suspend sales of Arkham Knight on PC in response to an overwhelming number of reports from gamers having issues. We've reached out to Warner Bros. for comment regarding its alleged knowledge of these issues pre-release, and will update as soon as we receive a reply.
That said, the console version of Rocksteady's superhero game has been received incredibly well by critics, with IGN's Dan Stapleton praising the title in his review for its "excellent gameplay variety" and "detailed open world."
That's not good.
Posted By CybrSlydr @ 9:04 AM
Wednesday June 24th, 2015
| Last week at E3, AMD pulled the wraps off its new flagship GPU: the liquid-cooled Radeon R9 Fury X. It’s the most powerful GPU AMD has ever created, as well as the company’s first video card to feature new architecture and a new type of memory. Even before its announcement, speculation ran wild about how it’d compare to Nvidia’s GTX 980 Ti, which is equally priced at $650.
When specs and AMD-provided performance numbers came out late last week, it appeared that the two cards were pretty closely matched—and after some extensive testing, we can confirm that both cards are indeed pretty darn close in performance.
Posted By CybrSlydr @ 11:42 PM
Tuesday June 16th, 2015
| The wait is finally over. A year and a half after the launch of the R200-series, months after Nvidia refreshed its entire GeForce lineup, AMD has lifted the veil off its new Radeon graphics card lineup: the Fury series, powered by revolutionary high-bandwidth memory (HBM), and the company’s brand new Fiji GPU.
| Leading the charge is the AMD Radeon R9 Fury X, which is capable of driving Tomb Raider to 45 frames per second at 5K resolution and Ultra settings, and offers 1.5 times the performance per watt of AMD’s previous R9 290X flagship. The graphics card boasts 4,096 stream processors—an incredible jump over the R9 290X’s 2,816. The Radeon R9 Fury X packs 8.9 billion transistors, compared to the R9 290X’s 6.3 billion transistors. Given that AMD—like Nvidia—is still using the 28nm manufacturing process, the Fiji GPU itself must be massive.
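As a quick sanity check on those spec jumps, here is a minimal sketch computing the relative increases implied by the figures quoted above (the numbers are taken straight from the article; the script itself is just illustrative arithmetic):

```python
# Relative increases implied by the quoted Fury X vs. R9 290X specs.
fury_x_sps, r9_290x_sps = 4096, 2816
fury_x_transistors, r9_290x_transistors = 8.9e9, 6.3e9

# Fractional increase = new / old - 1
sp_increase = fury_x_sps / r9_290x_sps - 1                          # ~45% more stream processors
transistor_increase = fury_x_transistors / r9_290x_transistors - 1  # ~41% more transistors

print(f"Stream processors: +{sp_increase:.0%}")
print(f"Transistors:       +{transistor_increase:.0%}")
```

So both budgets grew by roughly the same 40-45% on the same 28nm node, which is why the die has to be so large.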
Fury X has officially launched.
The R9 300 series will ship this coming Thursday.
Fury X Water Cooling Edition is shipping June 24th @ $649
Fury Air cooled is shipping July 14th @ $549
Fury X Specs
8.9 Billion Transistors
Fury Nano is coming later this summer. Lisa Su stated 1.5x the performance per watt of the 290X for the Fury X, and 2x the performance per watt for the Fury Nano. Pics of the Fury X, Nano, and dual Fury are in the attachments. VSR is coming as well, something that has been lacking from AMD for some time.
Surprisingly, I'm loving the form factor. I could make one hell of an mITX build with two Fury Nanos, all on water. It would make for the ultimate LAN PC or travel PC. Meanwhile, Quantum would be smaller than any console and as powerful as a full-size tower with a Fury X in it. It seems about the same size as the smallest Steam Machines.
Fury X industrial design
Fury X product overview
High-Bandwidth Memory with more than 3x the bandwidth per watt over GDDR5, along with a 4096-bit memory interface - the highest AMD GPU memory bandwidth ever.
Sleek industrial styling including GPU Tach activity indicator and LED illumination all in a compact 7.5-inch card that’s all about raw graphics power.
Powerful performance, bleeding-edge technology, liquid-cooled, whisper-quiet and future-ready for extreme 4K and VR gaming.
Posted By Blackops_2 @ 2:28 PM
Thursday June 11th, 2015
| The virtual reality company, bought by Facebook last year for $2 billion, said Thursday it plans to launch its consumer Rift headset early next year in a partnership with Microsoft that will tie the Xbox One game console to the Oculus platform.
The final version of the Rift, shown today for the first time, will come with an Xbox One wireless controller, as well as a standing camera to track your head movements and whether you're standing and moving around the room or sitting down. Oculus is also working on hand controllers of its own, called Oculus Touch, that resemble small joysticks with looping rings around the base. That hardware is being designed to bring more realistic hand-motions to VR worlds that will let people interact with the environment.
Posted By CybrSlydr @ 1:40 PM
Wednesday June 10th, 2015
| ...As usual, we have several synthetic benchmarks covered by Videocardz, taken from 3DMark’s database, which has validated several new cards. The GeForce GTX 980 Ti synthetic benchmarks Videocardz covered previously turned out to be close to the real results, and we can again expect these numbers to be close to those of the cards hitting the market later this month. The benchmark shows the Fiji XT-based Radeon Fury X performing about 100 3DMark points better than the GeForce GTX 980 Ti in Firestrike Ultra and Extreme (both cards clocked at reference speeds). The GeForce GTX 980 Ti OC shows additional performance due to its higher clock speeds, but we can expect the same from the Fury X, which is designed for overclocking with its unique water-cooling design. The GeForce GTX Titan X is faster in 3DMark Firestrike Extreme but slightly slower than the Radeon Fury X in 3DMark Firestrike Ultra, due to the higher ROP count that benefits gaming at 4K and higher resolutions.
The Radeon R9 390X and Radeon R9 390 also appear in the benchmarks. The Hawaii Pro-based R9 390, with its faster clock speeds, performs slightly ahead of the GeForce GTX 970, while the Radeon R9 390X is shown outpacing the Radeon R9 290X thanks to its larger VRAM and higher clock speeds, but it lags behind the GeForce GTX 980 in both 3DMark benchmarks.
Posted By CybrSlydr @ 1:57 PM
Friday June 5th, 2015
| By Luke Karmali
Valve has announced the first pre-order details for a series of upcoming Steam Machines and when they'll be available.
Though lots of Steam Machines have been announced, only a few will be available to pre-order now along with the Steam Controller. Doing so, however, will allow you to pick the hardware up several weeks before anyone else.
North American gamers will be able to pre-order an Alienware Steam Machine, Steam Link or Steam Controller from GameStop or Steam as of today and get their order on October 16, with the official launch not happening until November 10. This offer is only available for a limited time.
In Europe and Canada, GameStop, EB Games, Micromania, GAME and Steam are offering the Steam Link and Steam Controller for pre-order.
CyberPower's Steam Machine is set to be available directly from its site in all regions. More information on other Steam Machines will be coming in the next few weeks.
For a full breakdown on the variety of models, check out IGN's Steam Machine roundup.
Posted By CybrSlydr @ 11:01 AM
Monday June 1st, 2015
| For much of the last year now, the story of the high-end video card market has been the story of NVIDIA. In September of 2014 the company launched the GeForce GTX 980, the first and at the time most powerful member of their Maxwell 2 architecture, setting a new mark for both power efficiency and performance and securing their lead in the high-end video card market. NVIDIA then followed that up in March with the launch of the GeForce GTX Titan X, NVIDIA’s true flagship Maxwell part, and a part that only served to further cement their lead.
Based on the very powerful (and very large) GM200 GPU, the GTX Titan X is currently untouched in performance. However, priced at $1,000, it is also currently untouched in price. In NVIDIA’s current lineup there is a rather sizable gap between the $550 GTX 980 and the $1,000 GTX Titan X, and perhaps more significantly, the GTX Titan X was the only GM200 part on the market. With NVIDIA launching their fully enabled flagship card first, it was only a matter of time until they released a cheaper card based on a cut-down version of the GM200 GPU in order to fill that pricing hole and to put salvaged GM200s to good use.
Now, just a bit over two months since the launch of the GTX Titan X, NVIDIA is launching their second GM200 card, the GeForce GTX 980 Ti. Based on the aforementioned cut-down version of GM200, the GTX 980 Ti is the expected junior version of the GTX Titan X, delivering GM200 at a cheaper price point. But calling the GTX 980 Ti a cheaper GM200 may be selling it short; “cheaper” implies that the GTX 980 Ti is a much lesser card. At $649, the GTX 980 Ti is definitely cheaper, but the card that is launching today is not to be underestimated. The GTX 980 Ti may be intended to be the GTX Titan X’s junior, but with the excellent performance it delivers, the GTX 980 Ti may as well be the GTX Titan X itself.
Posted By CybrSlydr @ 11:47 AM
| XCOM 2 Announced -- IGN First
Fight to free Earth from alien domination in the PC-exclusive sequel.
By Dan Stapleton
Last week, 2K teased us with a cryptic website: AdventFuture.org. On it, a supposedly utopian near-future government called the Advent Administration offered a better life for citizens through genetic enhancement, while hackers subversively replaced text and images revealing that Advent isn’t as benevolent as it seems.
Today, IGN First officially announces that the game 2K hinted at is in fact Firaxis Games’ XCOM 2, a full sequel to 2012’s critically acclaimed XCOM: Enemy Unknown and a continuation of the now 20-year-old legendary turn-based tactical squad combat series, which will be available exclusively on PC this November. Watch the debut trailer above for a glimpse into a future where the aliens are in control of the Earth, and XCOM has gone underground to fight to overthrow their Advent government. This new guerrilla force will face more powerful enemies in unpredictable combat scenarios as they fight to turn the tables on a technologically superior enemy.
Over the next month IGN will reveal huge details about XCOM 2, uncovered during our visit to Firaxis’ Maryland studio. Tomorrow you’ll get an in-depth overview and interview with Creative Director Jake Solomon that’ll highlight the major new features, including new soldier classes, new aliens, stealth-infused tactics, procedurally generated maps, and more. Following that, we’ll have a Rewind Theater examination of all the details in today’s trailer reveal with Solomon and Lead Producer Garth DeAngelis.
And, later this week we’ll go in-depth with Solomon and DeAngelis on the reasons behind the bold move to take XCOM from a multiplatform series to a PC exclusive, and how XCOM 2 will be tailored to take advantage of that single platform’s strengths – followed by Firaxis’ exciting plans to support modders and their work.
You might notice a lack of gameplay video, but fear not: in two weeks, we’ll debut the first in-game footage of XCOM 2 on our IGN E3 Live Show.
All of that and much, much more XCOM 2 coverage is coming your way. As a huge XCOM fan, I couldn’t be more excited to tell you all about it.
Posted By CybrSlydr @ 11:39 AM
Saturday May 23rd, 2015
| Microsoft just rolled out the latest Windows 10 preview build, and apparently a Reddit user is already tinkering around with the preview build of DirectX 12 that comes with it. If you have been on the internet any time after the Redmond-based firm announced the latest API last year at GDC, you would know that there’s a grand debate on whether or not DirectX 12 will improve the performance of PC and Xbox One hardware by leaps and bounds. So to look into things first-hand, the user decided to test the preview build of the API on his more than three-year-old hardware: an Nvidia GeForce GTX 670 and an Intel Core i7-2600K. The results were quite amazing.
According to him, the tests he ran show a boost of several hundred percent in draw call throughput. As visible in the image below, the draw call count on DirectX 11 was 1,515,965 single-threaded, and 2,532,181 multithreaded. When the user switched to DirectX 12, the number of draw calls rose to 8,562,158, roughly 5.6 times the single-threaded DirectX 11 result and about 3.4 times the multithreaded one.
The user stated that he was quite surprised to see DirectX 12 already performing really well despite it being a preview build. He also explained how the improvement in performance is being calculated by the benchmark software; “There’s no actual point score. All it’s doing is increasing the number of draw calls by increasing scene complexity. It just keeps going until the framerate drops to 30, then notes the calls/sec and bails. Since it’s only issuing calls for primitives (apparently anyways) it’s actually giving you a solid idea of how raw output is limited by the number of draw calls that can be dispatched.”
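The percentage figures floating around for this test are easy to get wrong, so here is a small sketch that derives the speedups directly from the draw call counts reported above (the counts are the user's; the arithmetic is just new/old ratios):

```python
# Draw call throughput figures reported by the Reddit user's DX12 preview test.
dx11_st = 1_515_965   # DirectX 11, single-threaded
dx11_mt = 2_532_181   # DirectX 11, multithreaded
dx12    = 8_562_158   # DirectX 12

# Fractional increase relative to each DirectX 11 baseline.
vs_single = dx12 / dx11_st - 1   # ~4.6x more, i.e. a ~465% increase
vs_multi  = dx12 / dx11_mt - 1   # ~2.4x more, i.e. a ~238% increase

print(f"DX12 vs DX11 single-threaded: +{vs_single:.0%}")
print(f"DX12 vs DX11 multithreaded:   +{vs_multi:.0%}")
```

Either way the numbers are sliced, the jump in raw draw call throughput is enormous for a preview build.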
Posted By CybrSlydr @ 5:06 PM
Friday May 8th, 2015
| Windows 10 is going to be the last major revision of the operating system.
Jerry Nixon, a Microsoft development executive, said in a conference speech this week that Windows 10 would be the "last version" of the dominant desktop software.
His comments were echoed by Microsoft which said it would update Windows in future in an "ongoing manner". Instead of new stand-alone versions, Windows 10 would be improved in regular instalments, the firm said. Mr Nixon made his comments during Microsoft's Ignite conference held in Chicago this week.
In a statement, Microsoft said Mr Nixon's comments reflected a change in the way that it made its software. "Windows will be delivered as a service bringing new innovations and updates in an ongoing manner," it said, adding that it expected there to be a "long future" for Windows.
Which likely means that there will be "service packs" or "tiers" of the software, and you'll have to pay at regular intervals to stay updated. Looks like MS really is going to go for a fee-based OS from now on.
Posted By lexandro @ 3:23 PM
Wednesday May 6th, 2015
| By Jenna Pitcher
The consumer version of the Oculus Rift is expected to begin shipping in Q1 2016, with pre-orders opening “later this year,” Oculus VR announced today.
The consumer model is based on the Crescent Bay prototype and builds upon its "presence, immersion, and comfort." It also features an improved tracking system that accommodates seated and standing experiences, according to Oculus VR, along with updated ergonomics and a tweaked industrial design.
The company will share more details of its hardware, software, input and unannounced speciality games in the coming weeks, beginning with its technical specifications next week.
Speaking at a panel during SXSW in March, Oculus founder Palmer Luckey explained that the tentative late-2015 launch window for the Oculus Rift was given before the company "made a lot of changes to [Oculus'] roadmap." The vague release window was also mirrored by Facebook CFO Dave Wehner during Facebook's Q1 2015 earnings call.
Luckey also confirmed at the time that the Crescent Bay prototype headset uses two screens instead of one, a change the company credits for the Rift's visual quality.
Those interested in developing a “next-generation” VR game or app can find details at the Oculus Developer Center.
Posted By CybrSlydr @ 4:04 PM
Friday May 1st, 2015
| Today at Microsoft's BUILD 2015 conference, Square Enix and Nvidia showed off a seriously impressive tech demo for DirectX 12 using a whopping four GTX Titan video cards.
The video, entitled Witch Chapter 0 [cry], was made by the same Square Enix team that built the similarly pretty Agni's Philosophy tech demo shown off back at E3 2012. The new video features more than 63 million polygons in each scene, approximately six to ten times as many as were possible using DirectX 11.
This one isn't nearly as flashy as Agni's Philosophy, but the level of detail—particularly in the skin close-ups around the three minute mark—is pretty spectacular. It should be, since it's running on $4000 worth of Titan Xs.
We won't be seeing a playable game that looks quite this great anytime soon, but the technology doesn't seem completely pie-in-the-sky. Here's a quote from Square Enix's Hajime Tabata, the director of Final Fantasy XV, about the tech demo:
"Our team has always pursued cutting-edge pre-rendered and real-time CG. As a part of the technical development, we created this demo using world-class, real-time CG technology with generous support from leading-edge software and hardware providers – Microsoft’s Windows10/DirectX 12 and Nvidia’s GeForce GTX. The efforts from this project will power future game development as well as Final Fantasy XV, currently a work in progress."
Sure sounds like Final Fantasy XV belongs on the PC.
Check out the video above, courtesy of YouTube user Michael Wieczorek.
Source: PC Gamer
Posted By CybrSlydr @ 6:27 PM
Tuesday April 14th, 2015
| By Rachel Paxton-Gillilan
If you've ever wanted to legally watch a newly released movie without the trouble of traveling to the theater, then PRIMA Cinema may be for you. Unfortunately, it's only an option for the super wealthy.
PRIMA Cinema allows you to skip the movie theater entirely with a system that lets you screen new releases, many on the same day they're available in the theater, in your home. But the price isn't cheap. Each 24-hour "rental" costs $500 (USD), and obtaining the equipment costs a base of $35,000 (USD), with new users required to pay for ten movies upfront ($5,000).
The PRIMA box's price and business complexities ensure that the system won't pose a real threat to the theater business any time soon. The Verge, whose writers recently had the opportunity to watch Furious 7 using a PRIMA box, provides a detailed explanation of the security behind the system, which goes to great lengths to prevent pirating.
The box automatically downloads the encrypted movie over a secure connection days before its theatrical release, where it sits until the studio signs off. After it becomes available, authorized users are required to use a biometric thumbprint authorization system to actually rent the movie. Each film comes with a unique (invisible) watermark embedded to prevent piracy, and the rackmount PRIMA box will stop working altogether if it's moved thanks to the equipped accelerometers.
However, most of the complexity is going on behind the scenes. The average PRIMA user experience is simple and the movies reportedly look great--though they're not 4K yet. If you've got an extra $40,000 lying around, you can find out more information on PRIMA's website.
Posted By CybrSlydr @ 9:55 AM
Monday April 6th, 2015
| Earlier this month at GDC, AMD introduced their VR technology toolkit, LiquidVR. LiquidVR offers game developers a collection of useful tools and technologies for adding high performance VR to games, including features to make better utilization of multiple GPUs, features to reduce display chain latency, and finally features to reduce rendering latency. Key among the latter features set is support for asynchronous shaders, which is the ability to execute certain shader operations concurrently with other rendering operations, rather than in a traditional serial fashion.
It’s this last item that ended up kicking up a surprisingly deep conversation between myself, AMD’s “Chief Gaming Scientist” Richard Huddy, and other members of AMD’s GDC staff. AMD was keen to show off the performance potential of async shaders, but in the process we reached the realization that to this point AMD hasn’t talked very much about their async execution abilities within the GCN architecture, particularly within a graphics context as opposed to a compute context. While the idea of async shaders is pretty simple – executing shaders concurrently with (and yet not in sync with) other operations – it’s a bit less obvious just what the real-world benefits are and why this matters. After all, aren’t GPUs already executing a massive number of threads?
With that in mind AMD agreed it was something that needed further consideration, and after a couple of weeks they got back to us (and the rest of the tech press) with further details of their async shader implementation. What AMD came back to us with isn’t necessarily more detail on the hardware itself, but it was a better understanding of how AMD’s execution resources are used in a graphics context, why recent API developments matter, and ultimately why asynchronous shading/computing is only now being tapped in PC games.
Why Asynchronous Shading Wasn’t Accessible Before
AMD has offered multiple Asynchronous Compute Engines (ACEs) since the very first GCN part in 2011, the Tahiti-powered Radeon HD 7970. However, prior to now the technical focus on the ACEs was on pure compute workloads, which, true to their name, allow GCN GPUs to execute compute tasks from multiple queues. It wasn’t until very recently that the ACEs became important for graphical (or rather mixed graphics + compute) workloads.
Why? Well, the short answer is that, in yet another stake in the heart of DirectX 11, the API wasn’t well suited for asynchronous shading. The same heavily abstracted, driver- and OS-controlled rendering path that gave DX11 its relatively high CPU overhead and poor multi-core command buffer submission also enforced very stringent processing requirements. DX11 was a serial API through and through, both for command buffer execution and, as it turned out, shader execution.
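To make the serial-vs-async distinction concrete, here is a loose CPU-side analogy, not actual GPU or Direct3D code: a "graphics pass" and an independent "compute pass" are stand-ins for GPU workloads, and the sleeps are stand-ins for work that leaves execution units idle. The function names and timings are invented for illustration.

```python
# A CPU-side analogy for async shaders: a serial DX11-style queue forces
# independent work to wait, while DX12/ACE-style independent queues let it
# overlap and fill otherwise-idle time.
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_pass():
    time.sleep(0.2)   # stand-in for a render pass that underutilizes shader units

def compute_pass():
    time.sleep(0.2)   # stand-in for an independent compute job (e.g. a VR latency task)

# DX11-style: one serial queue, so compute waits for graphics to finish.
start = time.perf_counter()
graphics_pass()
compute_pass()
serial = time.perf_counter() - start

# DX12/ACE-style: independent queues execute concurrently.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(graphics_pass), pool.submit(compute_pass)]
    for f in futures:
        f.result()
concurrent = time.perf_counter() - start

print(f"serial: {serial:.2f}s, concurrent: {concurrent:.2f}s")
```

The concurrent version finishes in roughly half the time here, which is the same idle-time-filling effect the ACEs provide on real GCN hardware, minus all the scheduling subtleties of an actual GPU.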
Posted By CybrSlydr @ 11:35 AM
| Intel's 14nm tri-gate process has currently been seen in both Core M (Broadwell-Y) and Broadwell-U, with some discussions at Mobile World Congress regarding the Atom x5 and Atom x7 both featuring 14nm cores at their heart. For the mini-PC and laptop space, Core M fits nicely with a 4.5W TDP and the Core architecture; however, Intel’s Atom line also occupies a similar segment at a lower price point. The upgrade from Bay Trail is Cherry Trail, moving from 22nm Silvermont cores to 14nm Airmont cores.
Technically, it would seem that Cherry Trail is a catch-all name: the SoCs intended for mini-PCs will also ride under the name ‘Braswell’, using up to four Atom cores and Generation 8 graphics within a 4-6W TDP.
CPU World recently published details of four Braswell SKUs. For Braswell, similar to Bay Trail, Intel designs its Atom SoCs in terms of dual-core modules, where each core is separate apart from a shared L2 cache. The SoC then puts one or two of these modules on die (for two or four cores) without an overriding L3 cache. The Braswell SoCs will support DDR3-1600 memory and SIMD instructions up to SSE4, along with support for VT-x and Burst Performance Technology, which offers higher clocks for extremely short periods when required.
Posted By CybrSlydr @ 11:33 AM