Friday June 24th, 2016
| With the annual Electronic Entertainment Expo once again upon us, this week has brought a flood of gaming hardware and software news. On the PC front, AMD is once again sponsoring PC Gamer’s PC Gaming Show, and while the company isn’t making quite as large a showing this year – having just announced a raft of tech at Computex – it is still attending E3 to tease a bit of hardware. In a press release timed to go out as the PC Gaming Show begins, AMD is very briefly teasing the next two Polaris-based Radeon cards: the Radeon RX 470 and the Radeon RX 460.
AMD previously teased the Radeon RX 480 back at Computex, and with that card not shipping until the end of this month, the RX 470 and RX 460 are even more brief teases, essentially amounting to AMD confirming that they will exist.
As you can guess from the numbers, the RX 470 and RX 460 will slot in below the $199 RX 480. AMD’s press release specifically describes the RX 470 as a “refined, power-efficient HD gaming” card, while the RX 460 is “a cool and efficient solution for the ultimate e-sports gaming experience.” There are no further details such as performance, specifications, or pricing, so this is a true teaser in every sense of the word.
Posted By BlastMaster @ 9:36 PM
Tuesday June 14th, 2016
| Razer and Sensics, the two companies spearheading the Open Source Virtual Reality (OSVR) movement, just announced a second generation VR headset with an upgraded display that's supposedly on par with the Oculus Rift and HTC Vive. It's also several hundred dollars cheaper with an MSRP of $400 (€500).
"The HDK 2 allows us to meet the needs of VR fans and gamers and provide developers with affordable open-source hardware to innovate with," says Christopher Mitchel, OSVR Lead, Razer. "With the HDK 2 being able to deliver a visual experience on par with industry leaders, we will now be able to represent hardware agnostic VR media and games in all their glory for future headsets to adopt through the open source ecosystem."
Razer's on a mission to make VR accessible to a wider audience with an open source platform that doesn't lock anyone out. (If you're handy with a code editor, you can contribute to OSVR projects on GitHub.) Offering a lower cost headset in the form of the HDK (Hacker Development Kit) 2 ties in with that plan and, as far as Razer is concerned, brings parity to more expensive headsets offered by the competition.
Source: PC Gamer
Posted By CybrSlydr @ 9:14 AM
Tuesday May 31st, 2016
Intel just announced its first 10-core desktop CPU, the Core i7-6950X Extreme Edition...
Source: Engadget
But be prepared to pay through the nose for the privilege of owning it, as the 10-core i7 Extreme Edition will cost $1,723
Finally, I've been waiting for these to upgrade my 2600k, but :wowzers that price tag.
These should see some massive benches from the modding community with the 40 PCI lanes.
Posted By Spartacus @ 12:27 AM
Tuesday May 24th, 2016
| If you're hoping to plug Pascal into your water cooling setup, good news, EK Water Blocks has a new full-cover liquid cooling solution (EK-FC1080 GTX) for Nvidia's GeForce GTX 1080.
The new water block comes in four different variations, including two transparent Plexiglas variants (EK-FC1080 GTX and EK-FC1080 GTX - Nickel) and two acetal models (EK-FC1080 GTX - Acetal and EK-FC1080 GTX Acetal+Nickel). EK says there's no difference in performance between models, only the aesthetics. All four cover the entire PCB of the card and apply cooling directly to the GPU, RAM, and VRM (voltage regulation module).
EK went with a central inlet split-flow cooling engine design, same as with its flagship EK-Supremacy Evo CPU water block. According to EK, this type of heat exchanger works just as well with reversed water flow. In addition, EK says it's a solid option for liquid cooling solutions with weaker water pumps.
The bases of the new coolers are made from electrolytic copper or nickel-plated electrolytic copper, while the tops consist of POM acetal or acrylic, depending on the model. All variants come with pre-installed brass standoffs.
Nvidia's GeForce GTX 1080 is already the fastest single GPU solution on the planet, though if you plan to overclock, water cooling could lift the ceiling. In our tests using the stock cooling solution, we were able to goose the core and more by about 15 percent above reference (see our review for more in-depth coverage).
The new water blocks will be available on Friday priced between €100 (about $111) and €110 (about $123). Retention backplates, which also cool the memory ICs on the backside of the PCB, will be available for €30 (about $33) to €38 (about $42).
Source: PC Gamer
Posted By CybrSlydr @ 7:35 PM
Tuesday May 17th, 2016
| There can be only one
Earlier this month, Nvidia officially revealed the name and a few core specs for their next generation GPU, the GTX 1080. They invited a bunch of press and gamers to the unveiling party, showed off the hardware, claimed performance would beat the Titan X by about 30 percent, and then told us the launch date of May 27 and the price of $599. Then they dropped the mic and walked off the stage. Or at least, that's how it could have gone down.
What really happened is that we had several hours over the next day to try out some demos and hardware, all running on the GTX 1080. We were also given a deep dive into the Pascal GP104 architecture at the heart of the graphics card, and we left for home with a shiny new GTX 1080 box tucked safely in our luggage. And then we waited a few days for Nvidia to ship us drivers, benchmarked a bunch of games, and prepared for today, the day where we can officially talk performance, architecture, and some other new features.
If you're not really concerned about what makes the GTX 1080 tick, feel free to skip down about 2000 words to the charts, where we'll show performance against the current crop of graphics cards. Here's a spoiler: the card is **** fast, and even at the Founders Edition price of $699, it's still extremely impressive. For those who want more information, we've previously discussed the initial details of the GTX 1080, some of the new features and software, and explained the Founders Edition. Here's the 'brief' summary.
Source: PC Gamer
Posted By CybrSlydr @ 10:04 AM
Sunday May 8th, 2016
| NVIDIA gave us a taste of its new Pascal architecture with the P100 graphics card last month, which is aimed at servers for heavy duty computing. Now, it's ready to show off how that technology will be adapted for consumers with its new GeForce GTX 1080 GPU. As you'd expect, it's fast: NVIDIA CEO Jen-Hsun Huang revealed that it's faster than its current performance king, the $1,000 Titan X, as well as three times as power efficient. That's particularly impressive since it's the successor to NVIDIA's GTX 980, which retails for around $600.
It's been a day and no mention of this?
The GTX 1080 and GTX 1070 have been unveiled and price points have been set: the GTX 1080 at $599 USD and the GTX 1070 at $379. Nvidia claims both cards will outperform the GTX Titan X at a much lower TDP. It's gonna be interesting to see just what these cards can do when the GTX 1080 arrives on shelves at the end of May, with the GTX 1070 set for early June.
Posted By N64link @ 3:32 AM
Monday April 25th, 2016
| John Romero and fellow id Software co-founder Adrian Carmack proudly announce BLACKROOM™, a visceral, varied and violent shooter that harkens back to classic FPS play with a mixture of exploration, speed, and intense, weaponized combat. Use fast, skillful movement to dodge enemy attacks, circle-strafe your foes, and rule the air as you rocket jump in the single- and multiplayer modes. BLACKROOM launches with unique multiplayer maps and robust modding support for the community to make diabolical creations of their own design - Coming Winter 2018 to PC!
BLACKROOM is the FPS you have been waiting for: a return to fast, violent and masterful play on the PC. In BLACKROOM, you reign supreme in a variety of multiplayer modes, including co-op, 1-on-1 deathmatch and free-for-all arena in a motley mix of locations including hardcore military sims, hellish infernos and interstellar space. If you prefer a single-player experience, delve into an intense 10+ hour campaign, spanning wildly varied environments, from ruined Victorian mansions to Wild West ghost towns to treacherous pirate galleons and beyond.
- Platform - PC (DRM Free + Steam) and Mac
- Release Date - Winter 2018
- Genre - FPS
- Single-Player Campaign - 10 Hours, Leaderboard Challenge Modes
- Multiplayer - Co-op, 1-on-1 Deathmatch, Arena
- Multiplayer Maps - 6 Built In + Community Maps
- Fully Moddable, Run Dedicated Servers, Create Maps
- New Soundtrack by acclaimed metal guitarist George Lynch
BLACKROOM is the FPS you know we can make. Master fast, skillful movement with rocket jumping, strafe jumping and circle strafing. Wield intricately balanced weapons where each one has a specific use and does the damage that makes you feel good. Challenge yourself with expert abstract level design, invented and perfected by John Romero and fully realized by Adrian Carmack’s dark and unique style. Master six built-in multiplayer maps, as well as countless maps created by the community. In BLACKROOM, Romero is designing every level.
BLACKROOM is what the FPS community asked for. Community is and has always been at the core of FPS, and BLACKROOM allows for an incredible range of modding opportunities. Beyond the levels in the game, extend your experience with full mod support (no additional DLC or subscriptions) and dedicated servers. Put your skills to the test in Challenge Modes (speedrunning and more) that present unique and demanding goals.
BLACKROOM is unique because it is shifting. Change your environment from within the game with the proprietary Boxel, a device only allocated to HOXAR engineers. Influence the environment, your weapons and your enemies.
BLACKROOM is metal. It features a new soundtrack and compositions by acclaimed metal guitarist George Lynch, frequently cited as one of the best metal guitarists in the world.
Posted By CybrSlydr @ 10:25 AM
Friday April 22nd, 2016
| By Matt Porter
It seems as though Valve is getting ready to accept payment for Steam purchases via the digital currency bitcoin.
PCGamesN first reported the news, with screenshots from an announcement on the private developer forums on Steam starting to appear.
"We're using an external payment provider to process bitcoin payments to help partners reach more customers on Steam," reads the announcement (via Reddit). "If customers choose to pay via bitcoin, they'll still be charged the price already set in the local currency."
The payment processor will convert the payment amount into traditional currency, so that Valve will never actually be handling the bitcoin.
The announcement tells developers they don't need to take any action. "There is no need to set a bitcoin price or keep track of bitcoin valuation. The purchase price of your product does not change."
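The flow described above can be sketched in a few lines. This is an illustrative sketch, not Valve's or the processor's actual implementation: the processor quotes a bitcoin amount for the fixed local-currency price, and the merchant receives fiat without ever holding bitcoin.

```python
# Hypothetical sketch of a bitcoin payment-processor quote: the developer's
# local-currency price never changes; only the BTC amount charged varies
# with the exchange rate.

def quote_btc_amount(local_price: float, btc_rate: float) -> float:
    """Return the BTC amount a customer pays for a fiat-denominated price.

    local_price -- price in local currency (e.g. USD), set by the developer
    btc_rate    -- current exchange rate, local currency per 1 BTC
    """
    return round(local_price / btc_rate, 8)  # BTC is divisible to 8 places

# A $19.99 game with BTC trading around $450 (a 2016-era rate):
print(quote_btc_amount(19.99, 450.0))  # 0.04442222
```

The processor then converts that BTC back into $19.99 for Valve, which is why developers "don't need to take any action" or track bitcoin valuation.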
These reports are unconfirmed so far. If we hear anything official from Valve, we'll be sure to let you know.
Microsoft has been accepting bitcoin for content from its online stores since the end of 2014. You probably shouldn't give a robot bitcoin to spend though, because it'll buy drugs.
Posted By CybrSlydr @ 12:20 PM
Friday April 8th, 2016
| By Alex Osborn
Just last month, Microsoft announced plans to allow cross-network play over Xbox Live, and according to ID@Xbox European boss Agostino Simonetta, the company is now ready for any developer who wants to take advantage of the feature.
"Absolutely, we're ready," Simonetta told Eurogamer at EGX Rezzed when asked if the technology is currently in place. "Any title that wants to update their game to include cross-network play, any title that wants to launch soon and take advantage of that, we are ready."
Rocket League is the first title to take advantage of the feature, though Simonetta couldn't provide a specific date as to when Xbox One owners might be able to play against those on PS4, saying, "it's always up to the developer to decide. We issued an open invitation."
The Xbox exec went on to further emphasize the network infrastructure is there and the invitation is open to anyone interested. "We've made the announcement and we're ready - whoever wants to get on board," he added. "It remains an open invitation to any network that wants to do the same."
Whether or not we'll see cross-network support between Xbox One and PlayStation 4 for major third-party titles, however, remains very much up in the air, as Sony offered a cagey response when asked about working with Microsoft.
Posted By CybrSlydr @ 12:07 AM
Tuesday April 5th, 2016
| We've known the name of Nvidia's next generation architecture for some time now: Pascal. Everything beyond that has largely consisted of speculation—some of it reasonable, and some of it pie-in-the-sky dreaming. Today at Jen-Hsun's keynote for GTC2016, Nvidia has revealed some of the first details of the hardware. If you were hoping to see the GPU launch first for consumers, followed by professional versions later, we're still waiting to see how that plays out. For now, Nvidia is talking a few higher level details, and the halo P100 product is shaping up to be an absolute monster.
What you need to understand first is that P100 is apparently going "all in" on deep learning, which may or may not see use limited to Tesla and Quadro products. Things like NVLink—a high-speed bus linking multiple GPUs together—won't necessarily be used or needed in the world of PC gaming, but even if Pascal is focused more on deep learning and supercomputing applications, that doesn't mean it won't be a killer gaming chip. Let's start with what we know about Pascal P100.
If the above image looks a bit reminiscent of AMD's Fiji processors, there's good reason. Like Fiji, Nvidia is tapping HBM (High-Bandwidth Memory) for the P100, only they're using HBM2 instead of HBM1. The net result is four layers of stacked memory running on a 4096-bit bus, only the memory this time is running at 1.4Gbps instead of 1.0Gbps, yielding a total memory bandwidth of 720GB/s. That's all well and good, but perhaps more important than simply providing tons of memory bandwidth, HBM2 significantly increases the amount of memory per HBM stack, with P100 sporting a total of 16GB of VRAM. This was obviously a critical factor for Tesla cards, considering the older Tesla K40 already had 12GB of memory, and M40 likewise supports 12GB—not to mention the newly released M40 that comes with 24GB of GDDR5. HBM2 also includes "free" ECC protection, which is a plus for professional applications where reliability and accuracy are paramount.
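The quoted 720GB/s figure follows directly from the bus width and per-pin data rate; a quick check of the arithmetic:

```python
# Aggregate memory bandwidth: total bits per second across the bus,
# divided by 8 to convert to bytes per second.

def memory_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

print(memory_bandwidth_gbs(4096, 1.4))  # 716.8, quoted as ~720GB/s (HBM2)
print(memory_bandwidth_gbs(4096, 1.0))  # 512.0, first-gen HBM as on Fiji
```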
Thanks to the move to the 16nm FinFET process technology, Nvidia has also been able to substantially increase the number of transistors in the GPU core. Where GM200 in the M40 has 3072 CUDA cores and consists of eight billion transistors, P100 nearly doubles transistor counts to 15.3 billion. Nvidia also noted that this is their largest GPU ever, measuring 610mm2, but while that's impressive, GM200 also measured around 600mm2, so that aspect hasn't changed too much. That size does not include the silicon interposer, however, which has to cover the area of both the GPU as well as the HBM2 chips, so this definitely qualifies as a gargantuan chip. If you count all the transistors in the GPU, interposer, and HBM2 modules, Nvidia says there are 150 billion transistors all told.
What about core counts? Here's where things get a bit interesting. The Pascal architecture has once again evolved, changing the SM module size. In Kepler, a single SMX consisted of 192 CUDA cores, with the GK110 supporting up to 28 SMX units for 2880 CUDA cores total. Maxwell dropped the core count to 128 per SM, but the architecture was built to better utilize each core, leading to improved efficiency. In Pascal P100, Nvidia drops to just 64 CUDA cores per SM, and apparently there are further improvements to efficiency. What's interesting to note is that each SM in the P100 has 64 FP32 cores, along with 32 FP64 cores, and P100 also adds support for half-precision FP16, potentially doubling throughput in situations where raw performance takes priority over precision.
A fully enabled P100 has 60 SMs, giving a potential 3840 cores, but Tesla P100 disables four SMs to give 3584 total cores. That might sound like only a small step forward, considering the M40 has 3072 cores, but clock speeds have improved. Where M40 runs at 948-1114MHz, P100 can run at 1328-1480MHz. Raw compute power ends up being 21.2 half-precision FP16 TFLOPS, 10.6 single-precision FP32 TFLOPS, or 5.3 double-precision FP64 TFLOPS. M40 by comparison had half- and single-precision rates of 6.8 TFLOPS, but double precision rates of just 213 GFLOPS; that's because GM200 only included four FP64 cores per SMM, a significant departure from the GK110 Kepler architecture.
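Those TFLOPS figures can be reproduced from the core counts and boost clock, using the standard convention that each core retires one fused multiply-add (two FLOPs) per clock:

```python
def peak_tflops(cores: int, boost_ghz: float) -> float:
    """Peak throughput assuming one fused multiply-add (2 FLOPs)
    per core per clock, the usual basis for quoted figures."""
    return cores * 2 * boost_ghz / 1000

fp32_cores = 3584  # Tesla P100: 56 of 60 SMs enabled, 64 FP32 cores each
print(round(peak_tflops(fp32_cores, 1.48), 1))       # 10.6 FP32 TFLOPS
print(round(peak_tflops(fp32_cores // 2, 1.48), 1))  # 5.3 FP64 (half as many FP64 cores)
print(round(peak_tflops(fp32_cores * 2, 1.48), 1))   # 21.2 FP16 (2x packed throughput)
```

The same formula applied to M40 (3072 cores at 1114MHz) gives the ~6.8 FP32 TFLOPS the article cites.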
What all this means is that P100 may never be utilized in a mainstream consumer device. At best, I suspect we might see some new variant of Titan based off P100 in the future, but that could be a long way off. You see, even though Nvidia is spilling the beans on Tesla P100 today—or at least, some of the beans—and the chips are in volume production, Nvidia doesn't plan on full retail availability from OEMs until Q1'2017. That means we're far more likely to see a GP104 chip that skips all the ECC, HBM2, and FP64 stuff and potentially stuffs more FP32 cores into a smaller die than P100. Sadly, Nvidia is not commenting on any future consumer facing products at this time. Looks like we'll have to wait for Computex to hear more about the consumer lines.
Source: PC Gamer
Posted By CybrSlydr @ 9:38 PM
Sunday April 3rd, 2016
| The next-generation architecture will be a significant leap and here’s what we know so far
Published By: Jawwad Iqbal on April 3, 2016 09:58 am EST
The GPU Technology Conference will be held next week on April 5, and one of the interesting things to look forward to is NVIDIA Corp’s (NASDAQ:NVDA) media briefing, which will be delivered by CEO Jen Hsun Huang. The firm is expected to showcase the first ever demo of its upcoming next-generation GPU architecture, Pascal, which will go head-to-head against Advanced Micro Devices’ (NASDAQ:AMD) Polaris GPU architecture in 2016.
The architecture will mark NVIDIA’s transition from the 28 nanometer fabrication process down to 16 nanometer FinFET. This shift should give Pascal a significant power-efficiency advantage over the current Maxwell architecture, which stands as the most power-efficient 28nm series to date. Expect lower power requirements and more compact GPUs from the upcoming lineup. NVIDIA has opted to continue relying on Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture the 16nm parts.
Pascal architecture will finally introduce NVIDIA GPUs to the faster high-bandwidth memory (or 3D memory) that will allow much greater bandwidth over the current GDDR5 memory. Since 3D memory is stacked on the GPU package, data transfer speed is significantly increased and bandwidths up to 1TB/s will be achievable on a 4096-bit wide bus channel, all while delivering four times the power efficiency over GDDR5 and allowing twice as much memory to be packaged with the GPU. The flagship GP100 based entry is expected to boast 16GB VRAM.
Pascal’s Unified Memory will enable CPU-to-GPU and GPU-to-CPU interconnectivity, leading to faster data transfers and reduced redundancy. To bridge the two, NVIDIA is incorporating what it calls NVLink, which is intended to break free from the limitations of PCI-E and provide a higher-bandwidth path between the GPU and CPU.
The first two GPUs that will serve as replacements for the current GTX 970 and GTX 980 are expected to be unveiled at the event, and there is no shortage of rumors about what it will hold. A more recent report revealed NVIDIA’s plans to launch the two GPUs at Computex 2016 in May. Assumptions and rumors point to GDDR5X in the GTX 1070 and 1080, while the promised 3D Memory will make its way to the higher-end, GP100-based replacements for the GTX 980 Ti and GTX Titan.
GDDR5X promises double the bandwidth of GDDR5: on a 256-bit bus it can outperform GDDR5 on a 384-bit bus while consuming less power, making it a good fit for the upper-mainstream tiers. It makes sense for NVIDIA to follow this path and reserve HBM for the highest-end segment, where cost is not an immediate concern.
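The 256-bit-versus-384-bit claim checks out once GDDR5X's higher per-pin rates are factored in. The data rates below are illustrative assumptions: GDDR5 topped out around 7Gbps per pin at the time, while the GDDR5X spec targeted roughly 10-14Gbps.

```python
# Bandwidth = bus width (bits) * per-pin rate (Gbps) / 8 bits-per-byte.

def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

gddr5_384b = bandwidth_gbs(384, 7.0)    # 336.0 GB/s, a GTX 980 Ti-class setup
gddr5x_256b = bandwidth_gbs(256, 11.0)  # 352.0 GB/s on a narrower, cheaper bus
print(gddr5x_256b > gddr5_384b)         # True
```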
The naming scheme is something yet to be seen. Rumors have used the GTX 1070 and GTX 1080 naming scheme, which was also backed by an earlier leak that revealed the alleged cooling shrouds of the two GPUs. On the other hand, we also came to know that they will be referred to as GTX X70 and GTX X80.
Whatever the name NVIDIA decides or whatever they decide to show first at the event, we won’t have to wait too long to find out.
Posted By CybrSlydr @ 3:48 PM
Thursday March 31st, 2016
| Power that's not painfully expensive
Unless you've had your head under a rock for the past six months, you probably already know that virtual reality headsets--namely the Oculus Rift and the HTC Vive--are the new hotness when it comes to PC peripherals. VR is all over the media, but what keeps getting repeated over and over is that you'll need a "high-end" PC to play games on the Rift or Vive.
I'll be the first to say that the term "high-end" is relative. For PC gamers, high-end means i7 processors and graphics cards like the GTX 980 Ti and the R9 Fury. While such high-end parts will give you the best experience in VR, you can have an enjoyable experience without a $650 GPU, just as with conventional games. So I set out to build a rig with the lowest-priced CPU and GPU that are certified to work with the Rift.
Source: PC Gamer
Posted By CybrSlydr @ 8:43 PM
| Jason Rubin is thinking a lot about whether or not we'll feel comfortable while taking a walk. The scope of the former THQ president's job is as big as it ever was, but as head of Oculus' game development group, the problems he's talking about are of a different sort: exciting, blue-sky, futuristic stuff that it's still a bit hard to believe is real. For instance, how are we going to move around virtual reality worlds while sat on our butts? Will it make us lose our lunch?
During a recent visit to Oculus HQ, we spoke to Rubin at length about his development teams and the third-party Rift devs who are solving entirely new problems in the medium. In this primordial stage, new guidelines are being introduced and struck down regularly. Everything is new, even things we take for granted in non-VR games. For instance, before we even get to movement: How do you represent the player's body, or another player's body? There's no single correct solution, and when what you're making is the first of its kind, there's no way to know for sure if your chosen method will work.
"It's going to take a long time for us to get to the point where we're iterative as opposed to revolutionary," says Rubin. "So we have these hand tracked devices now, right, all of the VR headsets. I'm looking at you, you're in VR, I'm in VR, I've got three points of information about what you're doing. I have head rotation and position, hand rotations and positions. I want your whole body. I don't know anything about your feet. How do I make that look like a human and not like some weird marionette that's kind of stretched amongst it?"
Source: PC Gamer
Posted By CybrSlydr @ 10:03 AM
Wednesday March 23rd, 2016
| For nearly ten years now, Intel has been on a "tick-tock" processor design schedule. Intel would move to a new manufacturing process, improving performance and power efficiency—this was the tick. Then it would introduce a new processor architecture that improved efficiency and added new features—the tock.
Now, though, according to The Motley Fool, this model is being scrapped in favor of a new, three-step one. In Intel's most recent 10-K filing (an annual report to the US Securities and Exchange Commission), it states: "We expect to lengthen the amount of time we will utilize our 14 [nanometer] and our next-generation 10 [nanometer] process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions."
There's also a handy image to show the differences in the two methodologies.
As pointed out by Legit Reviews, the tick-tock model has already been on the way out. Haswell came as a sort of "semi-tock", and Intel has announced that Kaby Lake will be "refreshing" Skylake. The previously announced 10-nanometer Cannonlake is coming in 2017, Ice Lake is coming in 2018, and this will be refreshed by Tiger Lake in 2019. So we're already seeing the new three-step method of Process, Architecture, Optimization being put into action.
Source: PC Gamer
Posted By CybrSlydr @ 3:17 PM
Tuesday March 15th, 2016
| Every year, I'm tempted to buy a Razer Blade gaming notebook. I haven't yet. Though Razer is the only company consistently making a high-quality, high-performance ultraportable laptop, the high price has always held me back. I just can't bring myself to pay $2,000+ for a computer that won't run next year's games well.
But Razer's new Blade has an answer to my conundrum. Just like the 12.5-inch Razer Blade Stealth that wowed us last month, the new 14-inch Blade is effectively future-proof. If you need more graphical horsepower -- say, in a year or four -- you'll be able to buy a Thunderbolt 3 docking station that adds the full muscle of a desktop graphics card. It lets you easily swap in a new graphics card, whenever you like, without even needing a screwdriver.
Posted By CybrSlydr @ 10:13 AM