Thursday January 5th, 2017
| At AMD's Tech Summit in December, press, partners, and analysts were briefed on some of AMD's upcoming products; today we can finally talk about everything we saw. I've already talked a lot about Zen/Ryzen, but for gamers the bigger news is Vega. AMD gave us a roadmap last year listing its plans for GPU architectures: first Polaris, then Vega, and after that Navi. Polaris targeted the mainstream gaming audience, with good performance and efficiency, but Vega sets its sights higher, with a release target of "first half, 2017"—probably June, judging by AMD's history.
Along with working silicon, AMD has now released the first official details on Vega, and it's shaping up to be, *ahem*, out of this world.
Vega builds on everything that makes Polaris great, but it's not simply a bigger chip with more cores. AMD didn't provide Vega's core count or clock speed, but it will likely be 4,096 cores clocked at around 1.5-1.6GHz. The reason I can be so specific is that AMD also announced a new line of machine intelligence accelerators, called Radeon Instinct MI6, MI8, and MI25. The MI25 uses Vega and will provide up to 25 TFLOPS (with FP16—half that for FP32), which means the baseline for Vega is about 45 percent faster than the Fury X. Chew on that for a minute—45 percent faster than Fury X should put it well above the performance level of the GTX 1080, possibly even eclipsing the Titan X (and thereby the 1080 Ti).
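Here's a quick sketch of that back-of-the-envelope math, using the Fury X's published specs (4,096 cores at 1,050MHz, two FLOPS per core per clock); the implied Vega clock is my extrapolation, not an AMD figure:

```python
# Radeon Instinct MI25: up to 25 TFLOPS at FP16. Packed FP16 runs at
# twice the FP32 rate, so FP32 throughput is half that figure.
mi25_fp16_tflops = 25.0
vega_fp32_tflops = mi25_fp16_tflops / 2          # 12.5 TFLOPS

# Fury X: 4,096 cores x 1,050MHz x 2 FLOPS/clock = ~8.6 TFLOPS FP32.
fury_x_fp32_tflops = 4096 * 1.050e9 * 2 / 1e12

uplift = vega_fp32_tflops / fury_x_fp32_tflops - 1
print(f"Vega over Fury X: {uplift:.0%}")          # ~45 percent

# Working backward: at 4,096 cores, what clock yields 12.5 TFLOPS?
clock_ghz = vega_fp32_tflops * 1e12 / (4096 * 2) / 1e9
print(f"Implied clock: {clock_ghz:.2f}GHz")       # ~1.53GHz
```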
It's not just about TFLOPS, however. AMD has reworked several key elements of their GCN architecture, a major one being the memory subsystem. Vega includes 8GB (possibly 16GB) of HBM2 memory in two stacks. These deliver the same 512GB/s bandwidth as the four stacks of HBM1 in Fiji, but with two stacks the silicon interposer doesn't need to be as large, and HBM2 densities allow AMD to double (potentially quadruple) the amount of memory. We've seen quite a few instances where 4GB can limit performance, so Vega takes care of that problem.
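The stack arithmetic works out neatly, using the published per-stack figures for each HBM generation:

```python
# Fiji: four HBM1 stacks at 128GB/s each.
# Vega: two HBM2 stacks at 256GB/s each.
assert 4 * 128 == 2 * 256 == 512  # same 512GB/s total bandwidth

# Capacity: HBM1 topped out at 1GB per stack (hence Fiji's 4GB cap),
# while HBM2 stacks come in 4GB and 8GB densities.
print(2 * 4, 2 * 8)  # two HBM2 stacks give 8GB or 16GB
```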
But AMD isn't just calling this HBM or VRAM; it's now a "High-Bandwidth Cache" (HBC) and there's also a new "High-Bandwidth Cache Controller" (HBCC). The distinction is important, because the HBCC plays a much more prominent role in memory accesses. AMD calls this a "completely new memory hierarchy." That's probably a bit of hyperbole, but the idea is to better enable the GPU to work with large data sets, which is becoming an increasingly difficult problem.
As an example of why the HBCC is important, AMD profiled VRAM use for The Witcher 3 and Fallout 4. In both cases, the amount of VRAM allocated is around 2-3 times larger than the amount of VRAM actually 'touched' (accessed) during gameplay. The HBCC takes this into account, allowing the GPU to potentially work with significantly larger data sets, providing a 512TB virtual address space.
AMD also demonstrated a real-time physically rendered image of a house using more than 600GB of data, running on what I assume is an 8GB Vega card. If the HBCC works properly, even a 4GB card could potentially behave more like an 8-12GB VRAM card, while an 8GB card would equal a 16-24GB card.
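A sketch of what that claim implies; the `effective_capacity_gb` helper is illustrative (not an AMD formula), and the 2-3x multiplier comes from AMD's own game profiling:

```python
def effective_capacity_gb(physical_gb, alloc_to_touched_ratio):
    """If only 1/ratio of allocated VRAM is actually touched, the HBCC can
    keep hot pages in HBM and back the rest from system memory or storage,
    so the card behaves like a larger-VRAM part."""
    return physical_gb * alloc_to_touched_ratio

for phys in (4, 8):
    low = effective_capacity_gb(phys, 2)
    high = effective_capacity_gb(phys, 3)
    print(f"{phys}GB card ~ {low}-{high}GB")  # 4GB ~ 8-12GB, 8GB ~ 16-24GB
```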
Vega also has a new geometry pipeline. Similar to the VRAM use, AMD notes that there can be a 100X difference between polygons in a scene and those that are visible on the screen. To help, the new geometry engine will have over twice the throughput per clock compared to AMD's previous architecture. The compute unit is also improved, with native support for packed FP16 operations, which should prove very useful for machine learning applications. AMD's Mike Mantor also stated, "We've spent a lot of time tuning and tweaking to get our frequencies up—significantly—and power down," though note that the Radeon Instinct MI25 still has a "<300W" TDP.
Finally, AMD improved the pixel engine, with a new Draw Stream Binning Rasterizer that helps cull pixels that aren't visible in the final scene. All the render back-ends are also clients of the cache now, reducing the number of memory accesses (e.g., for when the pixel and shader pipelines both access the same texture). This should provide significant performance improvements with deferred rendering engines, which is what many modern games are using.
Based purely on the raw performance numbers, Vega would be impressive, but factor in the other changes and AMD's currently superior DX12/Vulkan performance, and we're looking at another exciting year in graphics cards. The GTX 1080 leads the Fury X by around 30 percent on average (less at 4K), so a 45 percent boost would put Vega well ahead, and if the architecture improvements can add another 10-15 percent Vega might even match or exceed Titan X. AMD has already demoed Doom running at 4K ultra and 65-75 fps (on a Ryzen system, no less), backing up that performance estimate.
For graphics junkies like me, June can't come soon enough.
Source: PC Gamer
Posted By CybrSlydr @ 4:51 PM
Friday December 23rd, 2016
| Steam’s Winter Sale went live yesterday, and the service has succumbed to what must have been a prolonged assault of shoppers trying to get the best deals on PC games. Steam, as of this writing, is completely down. Like, all of it.
We’ve contacted Valve to see if holiday traffic is the reason for the outage, and asked about a timetable for the return of normal operations. As we head into the holiday weekend, engineers are likely working hard to get the servers back up in a timely fashion, but we can’t promise the same speed for responses to press inquiries.
Your purchases will have to wait, and your voting for the Steam awards will have to take a backseat to the act of living the rest of your life. We hope for the return of Steam soon, or at least for some official communication from Valve about what is going on, and when we can expect it back.
Posted By CybrSlydr @ 11:13 AM
Thursday December 22nd, 2016
| They say ’tis the season for giving, and it looks like CD Projekt Red has been allowed to open one of their presents a few days before December 25: a cash bounty from the Polish government to the tune of 30 million zloty (that’s about $7 million US, or £5.6 million Queen’s megapounds), according to a report from WCCFTech.
To make off with those hefty stacks from a total jackpot of 116 million zloty, the Witcher 3 developer submitted four proposals to the Polish National Centre for Research and Development, along with one more relating to cross-platform development of GOG.
Let’s have a look at what the proposals were about, shall we, and then we can sit back and (perhaps excitedly, it is Christmas after all) speculate on what the studio could possibly be working on next:
City Creation: Comprehensive technology for the creation of "live", playable in real-time cities of great scale, based on the principles of artificial intelligence and automation, taking into account the development of innovative processes and tools supporting the creation of high-quality open-world games.
Comprehensive technology enabling unique gameplay for many players, taking into account opponent matchmaking, session management, object replication, and support for a variety of game modes, along with a unique set of dedicated tools.
Comprehensive technology for delivering a unique, film-quality RPG with an open world, also taking into account innovative process solutions and a unique set of dedicated tools.
Comprehensive technology enabling a significant increase in the quality and production of complex face and body animations for open-world RPG games, also taking into account innovative process solutions and a unique set of dedicated tools.
Elsewhere in the list of winners, Dying Light developer Techland saw their bank accounts fill up after promising a prototype of a first-person fantasy RPG, with money also reaching the coffers of CI Games, The Farm 51, and Bloober Team.
In a statement, CD Projekt Red boss Adam Kicinski said the resulting schemes would “enable Polish developers to carry out nearly 40 projects worth 191 million PLN.” Even without staring into our crystal ball (PLEASE LET IT BE CYBERPUNK 2077) and looking at the future, seeing this investment into our industry on the world stage gives me a warm, fuzzy feeling… or maybe that’s just all the glühwein kicking in.
Posted By CybrSlydr @ 9:54 AM
Tuesday November 29th, 2016
| I’ve had a go at Factorio [official site], and even managed to automate resource mining and production. I thought I had a good grip on the game until I saw what DaveMcW was able to create. Using just the components available in the base game, he managed to build what is essentially a video stream decoder and display program.
DaveMcW built a massive complex, composed of display, memory, and decoder sections, then replicated it via Blueprint 10 times to make a 178×100 pixel display with a total of 34MB of memory. These might not seem like impressive numbers until you factor in that Factorio offers players no built-in scripting, so the whole thing had to be wired up from the game’s circuit-network combinators—essentially programming at the assembly level.
All of this hard work went into playing the music video for Darude’s “Sandstorm,” but it seems like any video could be played on the massive array of small display modules. “Sandstorm” just happens to be one of the best choices that could have been used here. If you’re looking to use DaveMcW’s design, or build something similar, he goes in-depth on how he created the video player on the Factorio forums.
Source: Rock, Paper, Shotgun
Posted By CybrSlydr @ 9:51 PM
Saturday November 19th, 2016
| You might have thought that when Asus debuted its water-cooled GX700 laptop last year that it would be a one-and-done design, but you'd be wrong. Asus is at it again, this time with the ROG GX800, a similar looking system that's even bigger and more powerful.
Instead of a 17-inch panel, Asus supersized the ROG GX800 with an 18.4-inch display. It still boasts a 4K (3840x2160) resolution, as anything less could be deemed silly on such a sensible system (just a bit of slight sarcasm there), along with 100 percent coverage of the Adobe RGB color space. Oh, and it supports G-Sync, too.
Underneath the massive hood is an Intel Core i7-6820HK processor that's begging to be overclocked and not one, but two GeForce GTX 1080 GPUs in SLI. Yeah, it's like that.
This is a no-compromise laptop. Well, except for the obvious—portability. It's big (458x338x45.4mm) and heavy (5.7kg) all on its own, but add the Hydro Overclocking System, which tacks on another 4.7kg, and you're looking at staying stationary for a spell.
The cooling system is the most distinctive thing about the GX800. It's essentially a liquid-cooling dock that allows you to overclock the CPU, GPU, and RAM without fear of the thing cooking itself.
"With the Hydro Overclocking System, ROG GX800’s Intel K-SKU CPU can be overclocked to 4.2GHz so you get mind-blowing levels of performance. The graphics cards can be overclocked to 1961MHz, while VRAM and DRAM can be pushed up to 5,200MHz and 2,800MHz respectively," Asus claims.
Configurations will come with up to 64GB of DDR4-2800 RAM. It also supports up to three M.2 PCIe-based SSDs in RAID 0 and has built-in 802.11ac Wi-Fi, Bluetooth 4.1, two USB 3.1 Type-C ports, three USB 3.0 ports, separate microphone and headphone jacks, a GbE LAN port, HDMI and mini DisplayPort output, and a memory card reader.
Asus didn't say when the GX800 will be available or for how much, though with the GX700 selling for around $4,700, we suspect this one will top the $5,000 mark.
Source: PC Gamer
Posted By CybrSlydr @ 9:02 AM
Sunday October 16th, 2016
| Purchasing a new video game used to be simple. You’d go down to the local game store, slam sixty dollars on the counter, and bring home your brand new copy of Call of Battlefallverwatch 7: Multiversal Warfare. But as Bob Dylan once sang, “the times they are a-changin’.”
Now, each new big release boasts a million different collector’s editions with pre-order bonuses that barely fit inside the box, and they typically retail for a hundred dollars or more. Meanwhile, logging on to Steam or another digital distribution site gives you a chance of purchasing a AAA title for well below the sixty-dollar price point. Then there’s the indie market, which has begun to offer innovative games for around ten to twenty dollars. It seems like pricing is becoming less and less standard as the gaming landscape becomes more and more complex.
Who sets these prices and, if the sixty-dollar game really is on its way out, what’s preventing publishers from charging us even more in the future? To answer that, we have to examine why games were priced at sixty dollars to begin with.
And to understand that, we first have to look at the economics of video game retail.
Source: Game Crate
Posted By CybrSlydr @ 5:25 PM
Wednesday September 28th, 2016
| Hell yeah.
We need to learn a lesson about needless consumerism from this auto repair shop in Gdansk, Poland, because it still uses a Commodore 64 to run its operations. Yes, the same Commodore 64 released 34 years ago that clocked in at 1 MHz and had 64 kilobytes of RAM. It came out in 1982 and was discontinued in 1994, but it’s still being used to run a freaking company in 2016. That’s awesome.
To be sure, small businesses around the world often use technology that’s a bit more outdated than what the rest of us use in our daily lives but ****, flexing a Commodore 64 for work in a time when babies are given smartphones before pacifiers is pretty **** bad ***.
Here’s what Commodore USA’s Facebook page wrote regarding the computer:
This C64C used by a small auto repair shop for balancing driveshafts has been working non-stop for over 25 years! And despite surviving a flood it is still going...I know where I’m going if my car ever breaks down in Poland.
Posted By CybrSlydr @ 9:16 PM
Wednesday September 7th, 2016
| Nvidia's done a good job so far of fleshing out its high-end and mid-range Pascal offerings, but what about gamers on a tighter budget? That's where the GeForce GTX 1050 will likely come into play. Word on the web is that it's bound for an October release with a spec sheet that's similar to Nvidia's previous generation GeForce GTX 950.
That's coming from the folks at Benchlife, a Chinese-language website that posted a CPU-Z screenshot of the card's specs. Assuming it's the real deal, the GTX 1050 will sport a GP107 GPU with 768 CUDA cores. Before we get into the other specs, let's have a look at the Pascal parts that are already out there.
- Titan X: GP102 (3,584 CUDA cores @ 1417MHz, 384-bit memory interface)
- GTX 1080: GP104 (2,560 CUDA cores @ 1607MHz, 256-bit memory interface)
- GTX 1070: GP104 (1,920 CUDA cores @ 1506MHz, 256-bit memory interface)
- GTX 1060 6GB: GP106 (1,280 CUDA cores @ 1506MHz, 192-bit memory interface)
- GTX 1060 3GB: GP106 (1,152 CUDA cores @ 1506MHz, 192-bit memory interface)
According to the CPU-Z screenshot, the GeForce GTX 1050 will have up to 4GB of GDDR5 memory on a 128-bit wide bus. It will also feature 1316MHz (base) and 1380MHz (boost) clockspeeds, a 7Gbps memory clock, a texture fill rate of 84.2 GTexel/s, and 112.1GB/s of memory bandwidth.
The CUDA core count is the same as the GeForce GTX 950's, but clockspeeds are faster—the GTX 950 has base and boost clocks of 1,024MHz and 1,188MHz, respectively, along with a 6.6Gbps memory clock, 49.2 GTexel/s texture fill rate, and 105.6GB/s of memory bandwidth.
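Put side by side, the rumored uplifts over the GTX 950 work out as follows (a sketch from the leaked CPU-Z figures, which are unconfirmed):

```python
specs = {  # metric: (GTX 950, rumored GTX 1050)
    "base clock (MHz)":     (1024, 1316),
    "boost clock (MHz)":    (1188, 1380),
    "memory clock (Gbps)":  (6.6, 7.0),
    "bandwidth (GB/s)":     (105.6, 112.1),
    "fill rate (GTexel/s)": (49.2, 84.2),
}
for name, (gtx950, gtx1050) in specs.items():
    print(f"{name}: +{gtx1050 / gtx950 - 1:.0%}")
```

The clock uplifts land in the high-teens-to-high-twenties percent range while memory bandwidth barely moves, which is consistent with a 16nm refresh of the same core configuration.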
In short, the GeForce GTX 1050 is a faster-clocked GeForce GTX 950 with an upgraded GPU built on a 16nm manufacturing process. It will have a lower TDP at 75W compared to 90W, which is within what a PCIe x16 slot can deliver on its own, so it won't require a PCI-E power cable unless a third party deviates from the reference design.
There's no word on pricing, but based on the GTX 1060 3GB, we expect the GTX 1050 to target the $150 market, give or take.
Source: PC Gamer
Posted By CybrSlydr @ 1:09 PM
Wednesday August 31st, 2016
| It weighs 17 pounds, or around 8 kilograms -- a serious bit of heft, as we can attest from getting our hands on it at IFA here in Berlin.
It's the world's first laptop with a curved screen...not to mention two (2) GeForce GTX 1080 GPUs and a built-in mechanical keyboard.
It requires two (2) power supplies to run, and needs five (5) system fans and eight (8) heatpipes to stay cool. It holds up to 64GB of memory and five (5) storage drives at a time.
There's a Tobii eye-tracking camera so you can aim at foes just by looking at them. (Supported ones, anyhow.)
Oh, and this laptop has four (4) speakers and two (2) subwoofers. So you can blast while you blast, of course.
The curved screen measures 21 inches diagonally. (Typically, laptops top out at 17 or 18 inches). It's an Nvidia G-Sync screen, too.
The mechanical keyboard uses Cherry MX switches and has an RGB LED under each and every key...because who doesn't like colors?
Lastly, I'd like to bookend this article by reminding you: The Predator 21 X weighs 17 pounds.
In short, it's the most ridiculous gaming laptop ever conceived. It's more powerful than our CNET Future-Proof VR Gaming Desktop, and probably weighs as much. It likely costs a good deal more. Acer's Europe head told CNET's Roger Cheng that it will fetch a price north of $5,000 (£3,820 or AU£6,661).
We need one in the CNET offices yesterday. But you'll have to wait until the first quarter of 2017 to own one.
Start packing away those pennies, people.
Posted By CybrSlydr @ 2:59 PM
Tuesday August 30th, 2016
| Intel today officially took the wraps off Kaby Lake, its 7th generation Core processor architecture that slides in between Skylake (out now) and Cannonlake (due out in 2017).
One of the interesting things about Kaby Lake is that it throws a wrench into Intel's "tick-tock" release cadence, which has guided its processor releases for nearly a decade. So-called "ticks" are new process nodes, while "tocks" represent a brand new architecture on the same node. So for example:
- 32nm Westmere (tick)
- 32nm Sandy Bridge (tock)
- 22nm Ivy Bridge (tick)
- 22nm Haswell (tock)
- 14nm Broadwell (tick)
- 14nm Skylake (tock)
Under normal circumstances, Intel's 10nm Cannonlake architecture would have debuted next, but Cannonlake isn't due to arrive until late 2017. In the meantime, we have Kaby Lake, another 14nm architecture and a change to Intel's tick-tock cycle. The new pattern (for now) will be Process, Architecture, Optimization, with Kaby Lake being the "Optimization" for the 14nm node. Which is probably a better sequence than "hickory-dickory-dock."
Either way, Kaby Lake gets an official unveiling today with Intel promising up to 12 percent faster productivity performance and up to 19 percent faster web performance compared to Skylake. It's also pushing Kaby Lake as the appropriate choice for creating and editing 4K content, noting that it has the power to do such things up to 15 times faster than a 5-year-old PC (though why anyone would attempt 4K editing on a 2011 PC is beyond us).
Before you start making plans to upgrade your desktop, take a breath and relax. Mobile users get first dibs on Kaby Lake. "We are incredibly excited about the strong partnership with our OEM customers and expect more than 100 different 2-in-1s and laptops powered by 7th Gen Intel Core to be available starting in September through this holiday season. We will share more on the rest of the 7th Gen Intel Core family for desktops and enterprise PCs early next year," Intel said.
Today's launch consists of half a dozen processors, split equally between the Y-series for low-power systems such as 2-in-1 devices and the U-series for meatier laptops.
Looking ahead to what OEMs will do with these processors, Intel says to expect thinner convertibles measuring 10mm, slimmer clamshell laptops checking in at under 10mm, and fanless detachables with waistlines less than 7mm.
Intel also promises better gaming performance from the upcoming crop of mobile products. If we're again comparing to a 5-year-old PC, Intel claims a three-fold improvement in games like Overwatch. Intel bases that metric on pitting a Core i5-7200U against a Core i5-2467M in 3DMark's Cloud Gate test. Not that anyone would actually try gaming on an i5-2467M these days.
How things really fare is something we'll explore as Kaby Lake trickles into retail.
Source: PC Gamer
Posted By CybrSlydr @ 2:32 PM
| Samsung announced three new curved gaming monitors that employ the same quantum dot technology found in its TV lineup. They include the CFG70 available in 24-inch and 27-inch models, and the CF791 available in a larger 34-inch model.
Quantum dots are nano-sized particles that emit different colors of light based on their diameter. Displays using quantum dot technology are hyped to rival the quality of OLED panels, though current generation quantum dot solutions still rely on an LED backlight. Even so, Samsung claims both new monitor series offer up vivid and crisp colors while requiring less energy.
"Both monitors express brilliant color across a 125 percent sRGB spectrum, giving greater depth to blacks and sharpening color intricacies. These color distinctions increase the nuances of game play and far surpass display offerings available in conventional monitors," Samsung says. Samsung plays up the immersion factor due to the curvature of both monitor series, which for the CF791 is rated at 1,500R and for the CFG70 is rated at 1,800R.
The two CFG70 monitors both offer a 1920x1080 resolution with a 144Hz refresh rate and 1ms response time. They also have height adjustable stands that support tilt, swivel, and pivot. Why anyone would want to pivot a curved display into portrait mode is a different matter. Samsung's CF791 boasts a 3440x1440 resolution with a 100Hz refresh rate and 4ms response time. Its stand is height and tilt adjustable, but doesn't support swivel or pivot.
All three models have a single DisplayPort and two HDMI ports, as well as a headphone jack. Only the CF791 has built-in speakers (a pair of 7W units) and a two-port USB hub. All three displays also support variable refresh rates via FreeSync.
Pricing for the CFG70 is $399 for the 24-inch model and $499 for the 27-inch SKU, while the CF791 costs $999. All three will be available in the fourth quarter.
Source: PC Gamer
Posted By CybrSlydr @ 2:29 PM
| While it had been rumored that GDDR6 memory would end up on graphics cards released next year, it looks like we'll have to wait a little bit longer, at least from Samsung's vantage point. Samsung announced at ISCA 2016 in Seoul, Korea, that graphics cards wielding the next generation memory standard won't come out until 2018, according to multiple reports.
Digital Trends was in attendance at the convention and says that one of the slides Samsung presented indicated that GDDR6 will offer more than 14Gbps (gigabits per second) per pin. That trumps the up to 12Gbps offered by Micron's GDDR5X memory chips featured on Nvidia's GeForce GTX 1080, and of course it's more than the 8Gbps offered by standard GDDR5 memory.
On a 256-bit bus, GDDR6 could push up to 448GB/s (gigabytes per second), and up to 672GB/s on a 384-bit bus. As a point of reference, the aforementioned GTX 1080 pumps 320GB/s by way of a 256-bit wide bus and the Titan X does 480GB/s on a 384-bit bus. HBM2 and HBM3 will offer even more bandwidth, but the cost is higher and GDDR6 will likely find its way into many graphics cards.
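All four of those figures fall out of the same formula: bandwidth equals per-pin data rate times bus width, divided by eight bits per byte. A quick sketch:

```python
def bandwidth_gbs(rate_gbps, bus_bits):
    # Per-pin data rate (Gbit/s) times bus width (bits), divided by 8 bits/byte.
    return rate_gbps * bus_bits / 8

print(bandwidth_gbs(14, 256))  # GDDR6 on a 256-bit bus: 448.0 GB/s
print(bandwidth_gbs(14, 384))  # GDDR6 on a 384-bit bus: 672.0 GB/s
print(bandwidth_gbs(10, 256))  # GTX 1080 (10Gbps GDDR5X): 320.0 GB/s
print(bandwidth_gbs(10, 384))  # Titan X (10Gbps GDDR5X): 480.0 GB/s
```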
The added bandwidth and presumably larger amounts of onboard memory could help with higher resolution gaming and the push for VR content in future generation graphics cards. In addition to more bandwidth, Samsung is focusing on reducing power consumption. Exactly to what extent isn't yet known, though GDDR5 was able to achieve up to a 60 percent gain in power efficiency compared to GDDR4.
Source: PC Gamer
Posted By CybrSlydr @ 7:40 AM
Monday August 22nd, 2016
| By: Anthony Garreffa
HBM3 is being worked on by SK Hynix and Samsung and will offer up to 64GB VRAM at higher speeds than HBM2, but a low-cost version of HBM is also in the works, which will feature less bandwidth but a lower cost point than HBM1 and HBM2.
The new low-cost HBM will feature increased pin speeds, from 2Gbps on HBM2 to around 3Gbps, while memory bandwidth shifts from 256GB/sec per DRAM stack to around 200GB/sec per stack. That lower cost means HBM could finally reach the mass market, so we could be looking at HBM-powered notebooks and consumer graphics cards, rather than just the three HBM-equipped cards AMD offers today: the Radeon R9 Fury X, R9 Fury, and R9 Nano.
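Faster pins but less bandwidth per stack implies a narrower stack interface. Here's the arithmetic; the 1,024-bit HBM2 width is from the JEDEC spec, while the 512-bit figure for low-cost HBM is my inference, not a confirmed number:

```python
# HBM2: 2Gbps per pin across a 1,024-bit stack interface.
hbm2_gbs = 2 * 1024 / 8             # 256 GB/s per stack, as stated
print(hbm2_gbs)

# Low-cost HBM: ~3Gbps per pin but only ~200GB/s per stack.
implied_bus_bits = 200 * 8 / 3      # ~533 bits, pointing at a 512-bit interface
print(round(implied_bus_bits))
```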
Posted By CybrSlydr @ 8:08 AM
Thursday August 18th, 2016
| AMD said its Summit Ridge CPU, aimed at high-performance desktops, will pack 8 cores and feature simultaneous multi-threading technology to give it 16 threads of processing power. Gone are the shared, clustered multi-thread cores of the previous Bulldozer and Piledriver designs—Zen’s cores are stand-alone cores with SMT. To prove that Zen has the right stuff, AMD officials on Wednesday night demonstrated before a crowd of reporters and analysts that an 8-core Zen could run just as fast as Intel’s newest 8-core consumer Core i7 chip.
Source: PC World
Posted By FunkZ @ 12:04 PM
Wednesday August 17th, 2016
| The PC industry's glory days, when people snapped up new computers powered by steadily faster chips, are over. But Intel thinks its newest PC processor will get some hearts racing in a few months.
At its Intel Developer Forum in San Francisco on Tuesday, Intel Chief Executive Brian Krzanich showed PCs powered by a seventh-generation Core processor handling some demanding chores -- editing high-resolution 4K GoPro video and playing a hot new first-person shooter game, Overwatch.
It's "the highest performance CPU Intel has ever built. It'll make rich experiences available to everyone," Krzanich said. "We're shipping seventh-generation Core already to our PC partners and will launch devices to consumers this fall."
The seventh-generation Core processor, code-named Kaby Lake, is the first PC chip to emerge since Intel slowed its "tick-tock" pace of processor development. It previously introduced new chip designs and new manufacturing technology in alternating years, but Kaby Lake just refines an existing design on an existing manufacturing process.
The slower cadence isn't the only trouble for Intel. The steady improvement in processor clock speeds has largely stalled, PC sales are shrinking and consumers have flocked to smartphones powered by other companies' chips. But Krzanich is optimistic about Moore's Law, the observation named after Intel co-founder Gordon Moore that the number of electronic components on a chip doubles every two years.
"Moore's Law is far from dead," Krzanich said.
Posted By CybrSlydr @ 3:21 PM