Thursday February 23rd, 2017
| Ryzen is here. AMD said Wednesday that it plans a “hard launch” of its first three Ryzen processors on March 2. The highly anticipated chips promise to outperform high-end parts from Intel and undercut their prices by as much as 54 percent.
AMD executives confidently unveiled the first three desktop chips to attack Intel’s Core i7, supported by several top-tier motherboard vendors and boutique system builders. In many cases, executives said, AMD will offer more for less, as early Ryzen benchmarks suggest. The top-tier Ryzen 7 1800X will cost less than half of what Intel’s thousand-dollar Core i7-6900K chip does—and outperform it, too. You can preorder Ryzen chips and systems from 180 retailers and system integrators today.
Like Intel, AMD’s Ryzen offerings consist of three new chip families: the premium Ryzen 7, the midrange Ryzen 5, and the cheapest Ryzen 3. AMD is rolling out its fastest, premium Ryzen 7 chips first, including the Ryzen 7 1800X ($499), the Ryzen 7 1700X ($399) and Ryzen 7 1700 ($329). AMD’s Ryzen 5 and the Ryzen 3 will ship later this year—at the moment, AMD’s not saying exactly when.
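Those prices square with the "as much as 54 percent" undercut claim from the first paragraph. A quick sanity check (the roughly $1,089 list price for the Core i7-6900K is our figure, not AMD's):

```python
# Rough check of the "undercut by as much as 54 percent" claim.
# Assumes a ~$1,089 list price for the Core i7-6900K (our assumption);
# the Ryzen 7 1800X price is from AMD's announcement above.
ryzen_1800x = 499
core_i7_6900k = 1089  # assumed list price

discount = 1 - ryzen_1800x / core_i7_6900k
print(f"Ryzen 7 1800X undercuts the 6900K by {discount:.0%}")  # → 54%
```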
Why this matters: About the only major aspect of Ryzen that AMD hadn’t yet disclosed was its price and availability. Analysts say AMD appears to have done its homework, leaving Intel in danger of giving up market share in the bread-and-butter PC microprocessors that built its company. But Intel has its ways: Possible responses include price cuts, additional chips with more cores, and promoting its new Optane technology, they said.
Source: PC World
Posted By CybrSlydr @ 5:17 PM
Wednesday February 22nd, 2017
| AMD's first Ryzen CPUs are finally here. This morning, the company is putting its highest-end chips up for pre-order. The Ryzen 7 1800X, the Ryzen 7 1700X, and the Ryzen 7 1700 will mark AMD's return to the high-performance desktop CPU market, and the company's final internal numbers ahead of launch suggest they'll mark a return to competitiveness, as well.
We're flying home with Ryzen review samples as of this writing. Independent review results are still under NDA, but these three CPUs are the ones tasked with carrying AMD back into contention.
One of the most frequently repeated numbers in the lead-up to Ryzen has been AMD's goal of a "40% IPC increase" relative to its older processor generations. CEO Lisa Su claims the company has not only met, but exceeded that goal with a 52% improvement.
Source: Tech Report
Posted By CybrSlydr @ 4:45 PM
Friday February 17th, 2017
| The annual Game Developers Conference usually maintains an intense software focus, but GDC 2017 seems to be shaping up as a hit for PC hardware enthusiasts, too. Both AMD and Nvidia have announced events taking place alongside GDC, and in fact, both are being held on the very same day—February 28.
First up: AMD, which is reviving the “Capsaicin” event that debuted at GDC 2016. Here’s what’s on tap, according to the event registration:
“The Capsaicin livestream kicks off at 10:30 a.m. from Ruby Skye, a feature-packed show highlighting the hottest new graphics and VR technologies propelling the games industry forward.”
That sure makes it sound like more Radeon Vega graphics card details are on the way, doesn’t it? But don’t be surprised if AMD’s hotly anticipated Ryzen processors steal some of the spotlight, too.
A March 2 GDC session dedicated to the chips enticed developers with this pitch: “Join AMD Game Engineering team members for an introduction to the recently-launched AMD Ryzen CPU followed by advanced optimization topics.” Recently launched, huh? That verbiage has since been scrubbed from the listing, but during a recent financial call, AMD CEO Lisa Su said that Ryzen will launch in early March. Considering all that, a Ryzen appearance at Capsaicin 2017 seems very possible indeed.
Source: PC World
Posted By CybrSlydr @ 7:33 PM
| If you thought the Ryzen hype train might start to slow down after weeks of leaks and rumors, think again. The closer we get to Ryzen's release (likely early March, with a possible announcement on February 28 during AMD's Capsaicin event at GDC 2017), the juicier the leaks, it seems. Case in point: the latest bit of Ryzen news involves some interesting benchmarks of the Ryzen 5 1600X 6-core processor.
A quick recap is in order before we get to the numbers. AMD's Ryzen 5 1600X is one of 17 Ryzen SKUs AMD is expected to launch. The entire lineup will consist of 4-core, 6-core, and 8-core processors spread out between Ryzen 3, Ryzen 5, and Ryzen 7 series. There will be eight Ryzen 5 processors evenly split between 4-core and 6-core processors. The Ryzen 5 1600X is the fastest of the 6-core lineup—it has a 3.3GHz base clockspeed and a 3.7GHz boost, along with 3MB of L2 cache.
With the launch being so close, it's a reasonable assumption that several Ryzen processors are out in the wild, hence today's round of leaked benchmarks. These results emerged on a Chinese-language web forum. One of them shows performance metrics from CPU-Z's built-in benchmarking utility. The accompanying screenshot shows the chip running with a core voltage of 0.374V, and that the processor's boost clock kicked in, ramping the CPU up to 3.56GHz.
The Ryzen 5 1600X scored 1,888 in single-threaded performance and 12,544 in multi-threaded. Those numbers are fairly meaningless by themselves, so we fired up the same benchmark and ran it with Intel's Core i7-5960X, an 8-core Haswell-E chip clocked at 3GHz to 3.5GHz; an older Core i7-4790K, a 4-core Devil's Canyon (Haswell) chip clocked at 4GHz to 4.4GHz; a Core i7-4960X, a 6-core Ivy Bridge-E processor clocked at 3.6GHz to 4GHz; and a Core i7-6900K, an 8-core CPU clocked at 3.2GHz to 3.7GHz. To flesh things out, we also included a validated benchmark run from an Intel Core i7-5930K (6-core Haswell-E, 3.5GHz to 3.7GHz) that appears in CPU-Z's database, as it matches the Ryzen 5 1600X in core count and clockspeed.
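The leaked scores also pass a basic smell test: the multi-thread/single-thread ratio should land near the chip's six physical cores, with a modest bump from SMT. (This is a rough estimate that ignores the higher boost clock the single-thread run enjoys.)

```python
# Sanity check on the leaked CPU-Z scores: multi/single scaling should
# sit near 6x for a 6-core, 12-thread part, with a small SMT bonus.
single, multi = 1888, 12544
cores = 6

ratio = multi / single
smt_gain = ratio / cores - 1  # uplift beyond perfect 6-core scaling
print(f"scaling: {ratio:.2f}x, ~{smt_gain:.0%} from SMT")  # → 6.64x, ~11%
```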
Source: Hot Hardware
Posted By CybrSlydr @ 7:00 PM
Monday February 13th, 2017
| Valve, which recently replaced the Steam Greenlight program with Steam Direct, is currently working on three virtual reality games.
Valve president and cofounder Gabe Newell has not yet revealed any details about the three upcoming VR titles, aside from confirming that they will be full-blown games rather than short and simple tech demos.
Newell made the statement in a media roundtable held at the company's headquarters last week. He also said during the session that Valve is not currently interested in making games for consoles such as Sony's PlayStation 4 and Microsoft's Xbox One.
Valve Not Interested In Making Console Games
For the previous generation of consoles, Valve released several titles, including Counter-Strike: GO, the two installments of Left 4 Dead, Portal 2, and The Orange Box. The current generation of consoles is performing better than its predecessors, but interestingly, Valve has yet to release any titles for it.
"We get really frustrated working in walled gardens," said Newell at the media roundtable, hinting at how restricted Valve's development teams are when working on games for consoles compared to PC titles.
Source: Tech Times
Posted By CybrSlydr @ 7:24 AM
Thursday February 2nd, 2017
| Researchers have developed a new type of blue-phase liquid crystal that could allow computer monitors, televisions, and other displays to cram significantly more pixels into the same space while using less power than today's displays require, The Optical Society reports. One of the big upsides would be the potential for better VR headsets.
The idea of a blue-phase LCD is not a new one. Samsung built a prototype nearly a decade ago, though the technology hasn't gained much traction since then because it requires a lot of voltage and has a slow capacitor charging time.
Undeterred, a team of researchers at the University of Central Florida's College of Optics and Photonics (CREOL) worked with collaborators from LCD maker JNC Petrochemical in Japan and display maker AU Optronics in Taiwan.
Source: PC Gamer
Posted By CybrSlydr @ 5:17 PM
Wednesday January 25th, 2017
| The overarching goal is to make Windows 10 "the best operating system for games"—and critically, to make it more consistent, so that frame rates and performance are more predictable and uniform. Gammill said that when Game Mode is active, the operating system will tend to be biased toward allocating CPU and GPU resources to the game. The Creators Update will ship with a periodically updated whitelist of known games for automatically enabling Game Mode. Windows users will also be able to opt in and out of Game Mode on a game-by-game basis using the Win+G keyboard shortcut.
Source: Ars Technica
Posted By FunkZ @ 1:05 PM
Thursday January 5th, 2017
| At AMD's Tech Summit in December, press, partners, and analysts were briefed on some of AMD's upcoming products; today we can finally talk about everything we saw. I've already talked a lot about Zen/Ryzen, but for gamers the bigger news is Vega. AMD gave us a roadmap last year listing their plans for GPU architectures: first Polaris, then Vega, and after that Navi. Polaris targeted the mainstream gaming audience, with good performance and efficiency, but Vega sets its sights higher, with a release target of "first half, 2017"—probably June, judging by AMD's past history.
Along with working silicon, AMD has now released the first official details on Vega, and it's shaping up to be, *ahem*, out of this world.
Vega builds on everything that makes Polaris great, but it's not simply a bigger chip with more cores. AMD didn't provide Vega's core count or clock speed, but it will likely be 4,096 cores clocked at around 1.5-1.6GHz. The reason I can be so specific is that AMD also announced a new line of machine intelligence accelerators, called Radeon Instinct MI6, MI8, and MI25. The MI25 uses Vega and will provide up to 25 TFLOPS (with FP16—half that for FP32), which means the baseline for Vega is about 45 percent faster than the Fury X. Chew on that for a minute—45 percent faster than Fury X should put it well above the performance level of the GTX 1080, possibly even eclipsing the Titan X (and thereby the 1080 Ti).
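The arithmetic behind that estimate is easy to check. In this sketch, the 8.6 TFLOPS FP32 rating for the Fury X and the two-FLOPs-per-core-per-clock (FMA) assumption are ours, not AMD's:

```python
# Back-of-the-envelope check on the Vega numbers quoted above.
mi25_fp16 = 25e12          # FLOPS, per AMD's Radeon Instinct MI25 claim
vega_fp32 = mi25_fp16 / 2  # packed FP16 runs at 2x the FP32 rate
fury_x_fp32 = 8.6e12       # Fury X rated FP32 throughput (our figure)

print(f"Vega vs Fury X: +{vega_fp32 / fury_x_fp32 - 1:.0%}")  # → +45%
clock = vega_fp32 / (4096 * 2)  # implied clock if Vega has 4,096 cores
print(f"implied clock: {clock / 1e9:.2f} GHz")                # → 1.53 GHz
```

The implied clock landing in the 1.5-1.6GHz window is exactly why the 4,096-core guess looks solid.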
It's not just about TFLOPS, however. AMD has reworked several key elements of their GCN architecture, a major one being the memory subsystem. Vega includes 8GB (possibly 16GB) of HBM2 memory in two stacks. These deliver the same 512GB/s bandwidth as the four stacks of HBM1 in Fiji, but with two stacks the silicon interposer doesn't need to be as large, and HBM2 densities allow AMD to double (potentially quadruple) the amount of memory. We've seen quite a few instances where 4GB can limit performance, so Vega takes care of that problem.
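The stack math works out because HBM2 roughly doubles per-stack bandwidth. The per-stack figures below are the published spec rates (about 128GB/s for first-generation HBM, up to about 256GB/s for HBM2), not AMD-confirmed numbers for Vega:

```python
# Why two HBM2 stacks can match four HBM1 stacks in total bandwidth.
def total_bw(stacks, gb_per_s_per_stack):
    return stacks * gb_per_s_per_stack

fiji_hbm1 = total_bw(4, 128)  # Fury X: four HBM1 stacks
vega_hbm2 = total_bw(2, 256)  # Vega: two HBM2 stacks
print(fiji_hbm1, vega_hbm2)   # → 512 512
```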
But AMD isn't just calling this HBM or VRAM; it's now a "High-Bandwidth Cache" (HBC) and there's also a new "High-Bandwidth Cache Controller" (HBCC). The distinction is important, because the HBCC plays a much more prominent role in memory accesses. AMD calls this a "completely new memory hierarchy." That's probably a bit of hyperbole, but the idea is to better enable the GPU to work with large data sets, which is becoming an increasingly difficult problem.
As an example of why the HBCC is important, AMD profiled VRAM use for The Witcher 3 and Fallout 4. In both cases, the amount of VRAM allocated is around 2-3 times larger than the amount of VRAM actually 'touched' (accessed) during gameplay. The HBCC takes this into account, allowing the GPU to potentially work with significantly larger data sets, providing a 512TB virtual address space.
AMD also demonstrated a real-time physically rendered image of a house using more than 600GB of data, running on what I assume is an 8GB Vega card. If the HBCC works properly, even a 4GB card could potentially behave more like an 8-12GB VRAM card, while an 8GB card would equal a 16-24GB card.
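To make the idea concrete, here's a toy model of the concept (entirely our own sketch, not AMD's design): treat local VRAM as an LRU-managed cache of pages over a much larger virtual address space, so only the working set actually touched needs to be resident.

```python
# Hypothetical sketch of the HBCC idea: VRAM as a page cache.
from collections import OrderedDict

class HighBandwidthCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # page id -> resident flag
        self.faults = 0

    def touch(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)  # mark as recently used
        else:
            self.faults += 1              # fetch from system memory/disk
            self.pages[page] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used

# A game "allocates" far more than it touches: here the working set is
# 3 pages, so a 4-page cache serves it with only 3 compulsory faults.
cache = HighBandwidthCache(capacity_pages=4)
for _ in range(100):
    for page in (0, 1, 2):
        cache.touch(page)
print(cache.faults)  # → 3
```

That gap between allocated and touched data is exactly the 2-3x figure AMD profiled in The Witcher 3 and Fallout 4.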
Vega also has a new geometry pipeline. Similar to the VRAM use, AMD notes that there can be a 100X difference between polygons in a scene and those that are visible on the screen. To help, the new geometry engine will have over twice the throughput per clock compared to AMD's previous architecture. The compute unit is also improved, with native support for packed FP16 operations, which should prove very useful for machine learning applications. AMD's Mike Mantor also stated, "We've spent a lot of time tuning and tweaking to get our frequencies up—significantly—and power down," though note that the Radeon Instinct MI25 still has a "<300W" TDP.
Finally, AMD improved the pixel engine, with a new Draw Stream Binning Rasterizer that helps cull pixels that aren't visible in the final scene. All the render back-ends are also clients of the cache now, reducing the number of memory accesses (e.g., for when the pixel and shader pipelines both access the same texture). This should provide significant performance improvements with deferred rendering engines, which is what many modern games are using.
Based purely on the raw performance numbers, Vega would be impressive, but factor in the other changes and AMD's currently superior DX12/Vulkan performance, and we're looking at another exciting year in graphics cards. The GTX 1080 leads the Fury X by around 30 percent on average (less at 4K), so a 45 percent boost would put Vega well ahead, and if the architecture improvements can add another 10-15 percent Vega might even match or exceed Titan X. AMD has already demoed Doom running at 4K ultra and 65-75 fps (on a Ryzen system, no less), backing up that performance estimate.
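The performance chain in that paragraph, spelled out (all figures are this article's estimates, normalized to a Fury X baseline):

```python
# Estimate chain: Fury X -> GTX 1080 -> Vega (article's figures).
fury_x = 1.00
gtx_1080 = fury_x * 1.30  # GTX 1080 ~30% ahead of Fury X on average
vega_raw = fury_x * 1.45  # from the TFLOPS comparison

print(f"Vega vs GTX 1080 (raw): {vega_raw / gtx_1080 - 1:+.0%}")  # → +12%
# With the midpoint of the suggested 10-15% architectural gains:
print(f"with arch gains: {vega_raw * 1.125 / gtx_1080 - 1:+.0%}")
```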
For graphics junkies like me, June can't come soon enough.
Source: PC Gamer
Posted By CybrSlydr @ 4:51 PM
Friday December 23rd, 2016
| Steam’s Winter Sale went live yesterday, and the service has succumbed to what must have been a prolonged assault of shoppers trying to get the best deals on PC games. Steam, as of this writing, is completely down. Like, all of it.
We’ve contacted Valve to see if holiday traffic is the reason for the outage, and to ask about a timetable for the return of normal operations. With the holiday weekend upon us, engineers are likely working hard to get the servers back up in a timely fashion, but we can’t say the same for responses to press inquiries.
Your purchases will have to wait, and your voting for the Steam awards will have to take a backseat to the act of living the rest of your life. We hope for the return of Steam soon, or at least for some official communication from Valve about what is going on, and when we can expect it back.
Posted By CybrSlydr @ 11:13 AM
Thursday December 22nd, 2016
| They say ’tis the season for giving, and it looks like CD Projekt Red has been allowed to open one of their presents a few days before December 25: a cash bounty from the Polish government to the tune of 30 million zloty (that’s about $7 million, or £5.6 million Queen’s megapounds), according to a report from WCCFTech.
To make off with those hefty stacks from a total jackpot of 116 million zloty, the Witcher 3 developer submitted four proposals to the Polish National Centre for Research and Development, plus one more relating to cross-platform development of GOG.
Let’s have a look at what the proposals were about, shall we, and then we can sit back and (perhaps excitedly, it is Christmas after all) speculate on what the studio could possibly be working on next:
City Creation
Comprehensive technology for the creation of “live”, playable-in-real-time cities of great scale, based on the principles of artificial intelligence and automation, and taking into account the development of innovative processes and tools supporting the creation of high-quality open world games.
Comprehensive technology enabling the creation of unique gameplay for many players, taking into account the search for opponents, session management, replication facilities, and support for a variety of game modes, along with a unique set of dedicated tools.
Comprehensive technology for providing a unique, film-quality RPG with an open world, also taking into account innovative process solutions and a unique set of dedicated tools.
Comprehensive technology enabling a significant increase in the quality and production of complex face and body animations for open world RPG games, also taking into account innovative process solutions and a unique set of dedicated tools.
Elsewhere in the list of winners, Dying Light developer Techland saw their bank accounts fill up after promising a prototype of a first-person fantasy RPG, with money also reaching the coffers of CI Games, The Farm 51, and Bloober Team.
In a statement, CD Projekt Red boss Adam Kicinski said the resulting schemes would “enable Polish developers to carry out nearly 40 projects worth 191 million PLN.” Even without staring into our crystal ball (PLEASE LET IT BE CYBERPUNK 2077) and looking at the future, seeing this investment into our industry on the world stage gives me a warm, fuzzy feeling… or maybe that’s just all the glühwein kicking in.
Posted By CybrSlydr @ 9:54 AM
Tuesday November 29th, 2016
| I’ve had a go at Factorio [official site], and even managed to automate resource mining and production. I thought I had a good grip on the game until I saw what DaveMcW was able to create. Using just the components available in the base game, he managed to build what is essentially a video stream decoder and display program.
DaveMcW built a massive complex, composed of display, memory, and decoder sections, then replicated it via Blueprint 10 times to make a 178×100 pixel display with a total of 34MB of memory. These might not seem like impressive numbers until you factor in that Factorio doesn’t offer much in the way of built-in code interpretation, so the whole thing had to be pieced together at a very low, assembly-like level.
All of this hard work went into playing the music video for Darude’s “Sandstorm,” but it seems like any video could be played on the massive array of small display modules. “Sandstorm” just happens to be one of the best choices that could have been used here. If you’re looking to use DaveMcW’s design, or build something similar, he goes in-depth on how he created the video player on the Factorio forums.
Source: Rock, Paper, Shotgun
Posted By CybrSlydr @ 9:51 PM
Saturday November 19th, 2016
| You might have thought that when Asus debuted its water-cooled GX700 laptop last year that it would be a one-and-done design, but you'd be wrong. Asus is at it again, this time with the ROG GX800, a similar looking system that's even bigger and more powerful.
Instead of a 17-inch panel, Asus supersized the ROG GX800 with an 18.4-inch display. It still boasts a 4K (3840x2160) resolution, as anything less could be deemed silly on such a sensible system (just slight sarcasm there), along with 100 percent coverage of the Adobe RGB color space. Oh, and it supports G-Sync, too.
Underneath the massive hood is an Intel Core i7-6820HK processor that's begging to be overclocked and not one, but two GeForce GTX 1080 GPUs in SLI. Yeah, it's like that.
This is a no-compromise laptop. Well, except for the obvious—portability. It's big (458x338x45.4mm) and heavy (5.7kg) all on its own, but add the Hydro Overclocking System, which tacks on another 4.7kg, and you're looking at staying stationary for a spell.
The cooling system is the most unique thing about the GX800. It's essentially a liquid cooling dock that allows you to overclock the CPU, GPU, and RAM without fear of the thing cooking itself.
"With the Hydro Overclocking System, ROG GX800’s Intel K-SKU CPU can be overclocked to 4.2GHz so you get mind-blowing levels of performance. The graphics cards can be overclocked to 1961MHz, while VRAM and DRAM can be pushed up to 5,200MHz and 2,800MHz respectively," Asus claims.
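Asus's numbers imply serious headroom over stock. In this sketch, the stock baselines are our assumptions (the i7-6820HK's 2.7GHz base clock and the Skylake platform's official DDR4-2133 support), not figures from Asus:

```python
# Rough overclocking headroom implied by Asus's quoted figures.
# Stock baselines are assumptions: i7-6820HK base 2.7GHz, DDR4-2133.
stock = {"cpu_ghz": 2.7, "dram_mhz": 2133}
oc = {"cpu_ghz": 4.2, "dram_mhz": 2800}

for part in stock:
    gain = oc[part] / stock[part] - 1
    print(f"{part}: +{gain:.0%}")  # → cpu_ghz: +56%, dram_mhz: +31%
```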
Configurations will come with up to 64GB of DDR4-2800 RAM. It also supports up to three M.2 PCIe-based SSDs in RAID 0 and has built in 802.11ac Wi-Fi, Bluetooth 4.1, two USB 3.1 Type C ports, three USB 3.0 ports, separate microphone and headphone jacks, a GbE LAN port, HDMI and mini DisplayPort output, and a memory card reader.
Asus didn't say when the GX800 will be available or for how much, though with the GX700 selling for around $4,700, we suspect this one will top the $5,000 mark.
Source: PC Gamer
Posted By CybrSlydr @ 9:02 AM
Sunday October 16th, 2016
| Purchasing a new video game used to be simple. You’d go down to the local game store, slam sixty dollars on the counter, and bring home your brand new copy of Call of Battlefallverwatch 7: Multiversal Warfare. But as Bob Dylan once sang, “the times they are a-changin’.”
Now, each new big release boasts a million different collector’s editions with pre-order bonuses that barely fit inside the box, and they typically retail for a hundred dollars or more. Meanwhile, logging on to Steam or another digital distribution site gives you a chance of purchasing a AAA title for well below the sixty-dollar price point. Then there’s the indie market, which has begun to offer innovative games for around ten to twenty dollars. It seems like pricing is becoming less and less standard as the gaming landscape becomes more and more complex.
Who sets these prices and, if the sixty-dollar game is really on its way out, what is preventing them from charging us even more in the future? To answer that, we have to examine why games were priced at sixty dollars to begin with.
And to understand that, we first have to look at the economics of video game retail.
Source: Game Crate
Posted By CybrSlydr @ 5:25 PM
Wednesday September 28th, 2016
| Hell yeah.
We need to learn a lesson about needless consumerism from this auto repair shop in Gdansk, Poland. Because it still uses a Commodore 64 to run its operations. Yes, the same Commodore 64 released 34 years ago that clocked in at 1 MHz and had 64 kilobytes of RAM. It came out in 1982, was discontinued in 1994, but it’s still used to run a freaking company in 2016. That’s awesome.
To be sure, small businesses around the world often use technology that’s a bit more outdated than what the rest of us use in our daily lives but ****, flexing a Commodore 64 for work in a time when babies are given smartphones before pacifiers is pretty **** bad ***.
Here’s what Commodore USA’s Facebook page wrote regarding the computer:
"This C64C used by a small auto repair shop for balancing driveshafts has been working non-stop for over 25 years! And despite surviving a flood it is still going..."
I know where I’m going if my car ever breaks down in Poland.
Posted By CybrSlydr @ 9:16 PM
Wednesday September 7th, 2016
| Nvidia's done a good job so far of fleshing out its high-end and mid-range Pascal offerings, but what about gamers on a tighter budget? That's where the GeForce GTX 1050 will likely come into play. Word on the web is that it's bound for an October release with a spec sheet that's similar to Nvidia's previous generation GeForce GTX 950.
That's coming from the folks at Benchlife, a Chinese-language website that posted a CPU-Z screenshot of the card's specs. Assuming it's the real deal, the GTX 1050 will sport a GP107 GPU with 768 CUDA cores. Before we get into the other specs, let's have a look at the Pascal parts that are already out there:
- Titan X: GP102 (3,584 CUDA cores @ 1417MHz, 384-bit memory interface)
- GTX 1080: GP104 (2,560 CUDA cores @ 1607MHz, 256-bit memory interface)
- GTX 1070: GP104 (1,920 CUDA cores @ 1506MHz, 256-bit memory interface)
- GTX 1060 6GB: GP106 (1,280 CUDA cores @ 1506MHz, 192-bit memory interface)
- GTX 1060 3GB: GP106 (1,152 CUDA cores @ 1506MHz, 192-bit memory interface)
According to the CPU-Z screenshot, the GeForce GTX 1050 will have up to 4GB of GDDR5 memory on a 128-bit wide bus. It will also feature 1316MHz (base) and 1380MHz (boost) clockspeeds, a 7Gbps memory clock, a texture fill rate of 84.2 GTexel/s, and 112.1GB/s of memory bandwidth.
The CUDA core count is the same as the GeForce GTX 950's, but clockspeeds are faster—the GTX 950 has base and boost clocks of 1,024MHz and 1,188MHz, respectively, along with a 6,600MHz (6.6Gbps effective) memory clock, 49.2 GTexel/s texture fill rate, and 105.6GB/s memory bandwidth.
In short, the GeForce GTX 1050 is a faster clocked GeForce GTX 950 with an upgraded GPU built on a 16nm manufacturing process. It will have a lower TDP at 75W compared to 90W, and won't require a PCI-E power cable, unless a third party deviates from the reference design.
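Those bandwidth figures fall straight out of bus width and effective memory clock, which is a handy formula for checking any leaked spec sheet:

```python
# Memory bandwidth from bus width and effective data rate:
# bandwidth (GB/s) = bus_width_bits / 8 * effective_gbps
def mem_bandwidth(bus_bits, effective_gbps):
    return bus_bits / 8 * effective_gbps

print(f"GTX 1050: {mem_bandwidth(128, 7.0):.1f} GB/s")  # → 112.0 GB/s
print(f"GTX 950:  {mem_bandwidth(128, 6.6):.1f} GB/s")  # → 105.6 GB/s
```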
There's no word on pricing, but based on the GTX 1060 3GB, we expect the GTX 1050 to target the $150 market, give or take.
Source: PC Gamer
Posted By CybrSlydr @ 1:09 PM