Monday April 25th, 2016
| John Romero and fellow id Software co-founder Adrian Carmack proudly announce BLACKROOM™, a visceral, varied and violent shooter that harkens back to classic FPS play with a mixture of exploration, speed, and intense, weaponized combat. Use fast, skillful movement to dodge enemy attacks, circle-strafe your foes, and rule the air as you rocket jump in the single- and multiplayer modes. BLACKROOM launches with unique multiplayer maps and robust modding support for the community to make diabolical creations of their own design - Coming Winter 2018 to PC!
BLACKROOM is the FPS you have been waiting for: a return to fast, violent and masterful play on the PC. In BLACKROOM, you reign supreme in a variety of multiplayer modes, including co-op, 1-on-1 deathmatch and free-for-all arena in a motley mix of locations including hardcore military sims, hellish infernos and interstellar space. If you prefer a single-player experience, delve into an intense 10+ hour campaign, spanning wildly varied environments, from ruined Victorian mansions to Wild West ghost towns to treacherous pirate galleons and beyond.
- Platform - PC (DRM Free + Steam) and Mac
- Release Date - Winter 2018
- Genre - FPS
- Single-Player Campaign - 10 Hours, Leaderboard Challenge Modes
- Multiplayer - Co-op, 1-on-1 Deathmatch, Arena
- Multiplayer Maps - 6 Built In + Community Maps
- Fully Moddable, Run Dedicated Servers, Create Maps
- New Soundtrack by acclaimed metal guitarist George Lynch
BLACKROOM is the FPS you know we can make. Master fast, skillful movement with rocket jumping, strafe jumping and circle strafing. Wield intricately balanced weapons where each one has a specific use and does the damage that makes you feel good. Challenge yourself with expert abstract level design, invented and perfected by John Romero and fully realized by Adrian Carmack’s dark and unique style. Master six built-in multiplayer maps, as well as countless maps created by the community. In BLACKROOM, Romero is designing every level.
BLACKROOM is what the FPS community asked for. Community is and has always been at the core of FPS, and BLACKROOM allows for an incredible range of modding opportunities. Beyond the levels in the game, extend your experience with full mod support (no additional DLC or subscriptions) and dedicated servers. Put your skills to the test in Challenge Modes (speedrunning and more) that present unique and demanding goals.
BLACKROOM is unique because it is shifting. Change your environment from within the game with the proprietary Boxel, a device only allocated to HOXAR engineers. Influence the environment, your weapons and your enemies.
BLACKROOM is metal. It features a new soundtrack and compositions by acclaimed metal guitarist George Lynch, frequently cited as one of the best metal guitarists in the world.
Posted By CybrSlydr @ 10:25 AM
Friday April 22nd, 2016
| By Matt Porter It seems as though Valve is getting ready to accept payment for Steam purchases via digital currency bitcoin.
PCGamesN first reported the news, with screenshots from an announcement on the private developer forums on Steam starting to appear.
"We're using an external payment provider to process bitcoin payments to help partners reach more customers on Steam," reads the announcement (via Reddit). "If customers choose to pay via bitcoin, they'll still be charged the price already set in the local currency."
The payment processor will convert the payment amount into traditional currency, so that Valve will never actually be handling the bitcoin.
The announcement tells developers they don't need to take any action. "There is no need to set a bitcoin price or keep track of bitcoin valuation. The purchase price of your product does not change."
These reports are unconfirmed so far. If we hear anything official from Valve, we'll be sure to let you know.
Microsoft has been accepting bitcoin for content from its online stores since the end of 2014. You probably shouldn't give a robot bitcoin to spend though, because it'll buy drugs.
Posted By CybrSlydr @ 12:20 PM
Friday April 8th, 2016
| By Alex Osborn Just last month, Microsoft announced plans to allow cross-network play over Xbox Live, and according to ID@Xbox European boss Agostino Simonetta, the company is now ready for any developer who wants to take advantage of the feature.
"Absolutely, we're ready," Simonetta told Eurogamer at EGX Rezzed when asked if the technology is currently in place. "Any title that wants to update their game to include cross-network play, any title that wants to launch soon and take advantage of that, we are ready."
Rocket League is the first title to take advantage of the feature, though Simonetta couldn't provide a specific date as to when Xbox One owners might be able to play against those on PS4, saying, "it's always up to the developer to decide. We issued an open invitation."
The Xbox exec went on to further emphasize the network infrastructure is there and the invitation is open to anyone interested. "We've made the announcement and we're ready - whoever wants to get on board," he added. "It remains an open invitation to any network that wants to do the same."
Whether or not we'll see cross-network support between Xbox One and PlayStation 4 for major third-party titles, however, remains very much up in the air, as Sony offered a cagey response when asked about working with Microsoft.
Posted By CybrSlydr @ 12:07 AM
Tuesday April 5th, 2016
| We've known the name of Nvidia's next generation architecture for some time now: Pascal. Everything beyond that has largely consisted of speculation—some of it reasonable, and some of it pie-in-the-sky dreaming. Today at Jen-Hsun Huang's keynote for GTC 2016, Nvidia revealed some of the first details of the hardware. If you were hoping to see the GPU launch first for consumers, followed by professional versions later, we're still waiting to see how that plays out. For now, Nvidia is talking about a few higher-level details, and the halo P100 product is shaping up to be an absolute monster.
What you need to understand first is that P100 is apparently going "all in" on deep learning, a focus that may see its use limited to Tesla and Quadro products. Things like NVLink—a high-speed bus linking multiple GPUs together—won't necessarily be used or needed in the world of PC gaming, but even if Pascal is focused more on deep learning and supercomputing applications, that doesn't mean it won't be a killer gaming chip. Let's start with what we know about Pascal P100.
If the above image looks a bit reminiscent of AMD's Fiji processors, there's good reason. Like Fiji, Nvidia is tapping HBM (High-Bandwidth Memory) for the P100, only they're using HBM2 instead of HBM1. The net result is four layers of stacked memory running on a 4096-bit bus, only the memory this time is running at 1.4Gbps instead of 1.0Gbps, yielding a total memory bandwidth of 720GB/s. That's all well and good, but perhaps more important than simply providing tons of memory bandwidth, HBM2 significantly increases the amount of memory per HBM stack, with P100 sporting a total of 16GB of VRAM. This was obviously a critical factor for Tesla cards, considering the older Tesla K40 already had 12GB of memory, and M40 likewise supports 12GB—not to mention the newly released 24GB GDDR5 variant of the M40. HBM2 also includes "free" ECC protection, which is a plus for professional applications where reliability and accuracy are paramount.
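That bandwidth figure follows directly from the bus width and per-pin transfer rate. A quick back-of-the-envelope check (the four-stack, 1024-bit-per-stack layout is the standard HBM arrangement):

```python
# HBM2 on P100: four stacks, each with a 1024-bit interface,
# running at 1.4Gbps per pin.
bus_width_bits = 4 * 1024     # 4096-bit aggregate bus
data_rate_gbps = 1.4          # transfers per pin, in Gbit/s

# Divide by 8 to convert bits to bytes.
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gbs:.1f} GB/s")  # 716.8 GB/s, i.e. the ~720GB/s quoted
```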
Thanks to the move to the 16nm FinFET process technology, Nvidia has also been able to substantially increase the number of transistors in the GPU core. Where GM200 in the M40 has 3072 CUDA cores and consists of eight billion transistors, P100 nearly doubles transistor counts to 15.3 billion. Nvidia also noted that this is their largest GPU ever, measuring 610mm2, but while that's impressive, GM200 also measured around 600mm2, so that aspect hasn't changed too much. That size does not include the silicon interposer, however, which has to cover the area of both the GPU as well as the HBM2 chips, so this definitely qualifies as a gargantuan chip. If you count all the transistors in the GPU, interposer, and HBM2 modules, Nvidia says there are 150 billion transistors all told.
What about core counts? Here's where things get a bit interesting. The Pascal architecture has once again evolved, changing the SM module size. In Kepler, a single SMX consisted of 192 CUDA cores, with the GK110 supporting up to 15 SMX units for 2880 CUDA cores total. Maxwell dropped the core count to 128 per SM, but the architecture was built to better utilize each core, leading to improved efficiency. In Pascal P100, Nvidia drops to just 64 CUDA cores per SM, and apparently there are further improvements to efficiency. What's interesting to note is that each SM in the P100 has 64 FP32 cores, along with 32 FP64 cores, and P100 also adds support for half-precision FP16, potentially doubling throughput in situations where raw performance takes priority over precision.
A fully enabled P100 has 60 SMs, giving a potential 3840 cores, but Tesla P100 disables four SMs to give 3584 total cores. That might sound like only a small step forward, considering the M40 has 3072 cores, but clock speeds have improved. Where M40 runs at 948-1114MHz, P100 can run at 1328-1480MHz. Raw compute power ends up being 21.2 half-precision FP16 TFLOPS, 10.6 single-precision FP32 TFLOPS, or 5.3 double-precision FP64 TFLOPS. M40 by comparison had half- and single-precision rates of 6.8 TFLOPS, but double precision rates of just 213 GFLOPS; that's because GM200 only included four FP64 cores per SMM, a significant departure from the GK110 Kepler architecture.
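The TFLOPS figures likewise fall out of core count and clock speed: each FP32 core can retire one fused multiply-add (two floating-point operations) per cycle, FP16 runs at double that rate on P100, and FP64 at half. A sketch of the arithmetic:

```python
cores_fp32 = 3584        # Tesla P100 (56 of 60 SMs enabled)
boost_clock_ghz = 1.48   # top of the quoted 1328-1480MHz range

# FMA = 2 FLOPs per core per clock; GFLOPS / 1000 = TFLOPS.
fp32_tflops = cores_fp32 * boost_clock_ghz * 2 / 1000
print(f"FP32: {fp32_tflops:.1f} TFLOPS")      # ~10.6
print(f"FP16: {fp32_tflops * 2:.1f} TFLOPS")  # ~21.2
print(f"FP64: {fp32_tflops / 2:.1f} TFLOPS")  # ~5.3
```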
What all this means is that P100 may never be utilized in a mainstream consumer device. At best, I suspect we might see some new variant of Titan based on P100 in the future, but that could be a long way off. You see, even though Nvidia is spilling the beans on Tesla P100 today—or at least, some of the beans—and the chips are in volume production, Nvidia doesn't plan on full retail availability from OEMs until Q1'2017. That means we're far more likely to see a GP104 chip that skips all the ECC, HBM2, and FP64 stuff and potentially stuffs more FP32 cores into a smaller die than P100. Sadly, Nvidia is not commenting on any future consumer-facing products at this time. Looks like we'll have to wait for Computex to hear more about the consumer lines.
Source: PC Gamer
Posted By CybrSlydr @ 9:38 PM
Sunday April 3rd, 2016
| The next-generation architecture will be a significant leap and here’s what we know so far
Published By: Jawwad Iqbal on April 3, 2016 09:58 am EST
The GPU Technology Conference will be held next week on April 5, and one of the interesting things to look forward to is NVIDIA Corp's (NASDAQ:NVDA) media briefing, which will be delivered by CEO Jen-Hsun Huang. The firm is expected to showcase the first ever demo of its upcoming next-generation GPU architecture, Pascal. It will go head-to-head against Advanced Micro Devices' (NASDAQ:AMD) Polaris GPU architecture in 2016.
The GPU architecture will mark NVIDIA's transition from the 28-nanometer fabrication process down to 16-nanometer FinFET. This shift will give Pascal a significant power-efficiency advantage over the current Maxwell architecture, which stands as the most power-efficient series built on 28nm. Expect lower power requirements and more compact GPUs in the upcoming lineup. NVIDIA has opted to continue relying on Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture the 16nm chips.
The Pascal architecture will finally introduce NVIDIA GPUs to faster high-bandwidth memory (or 3D memory), which allows much greater bandwidth than current GDDR5 memory. Since 3D memory is stacked on the GPU package, data transfer speeds are significantly increased: bandwidths of up to 1TB/s will be achievable on a 4096-bit bus, all while delivering four times the power efficiency of GDDR5 and allowing twice as much memory to be packaged with the GPU. The flagship GP100-based entry is expected to boast 16GB of VRAM.
Pascal’s Unified Memory will enable CPU-to-GPU and GPU-to-CPU interconnectivity, leading to faster data transfers and reduced redundancy. To enable this, NVIDIA is incorporating what it calls NVLink, which aims to break free of the limitations of PCI-E and provide a higher-bandwidth path between the GPU and CPU.
The first two GPUs that will serve as replacements for the current GTX 970 and GTX 980 are expected to be unveiled at the event. There is no shortage of rumors about what the event holds. A more recent report revealed NVIDIA's plans to launch the two GPUs at Computex 2016 in May. Assumptions and rumors point to the possibility of GDDR5X being used in the GTX 1070 and 1080, while the promised 3D memory will make its way to the higher-end GP100-based GTX 980 Ti and GTX Titan replacements.
GDDR5X will provide double the bandwidth of GDDR5. On a 256-bit bus, it can outperform GDDR5 on a 384-bit bus while consuming less power, so it should be a good fit for those two cards. It makes sense for NVIDIA to follow this path and reserve HBM for the higher-end segment, where cost is not an immediate concern.
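To see why a narrower GDDR5X bus can keep up, compare aggregate bandwidths at plausible per-pin rates (a sketch under assumed figures: 7Gbps was typical for GDDR5 at the time, while the GDDR5X spec covers roughly 10-14Gbps):

```python
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Aggregate memory bandwidth in GB/s: bus width times per-pin rate, bits to bytes."""
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gbs(384, 7))    # 384-bit GDDR5:   336.0 GB/s
print(bandwidth_gbs(256, 10))   # 256-bit GDDR5X:  320.0 GB/s at the low end
print(bandwidth_gbs(256, 14))   # 256-bit GDDR5X:  448.0 GB/s at the high end
```

So a 256-bit GDDR5X configuration matches or beats the 384-bit GDDR5 setup once per-pin rates climb past roughly 10.5Gbps, with fewer traces and less power.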
The naming scheme remains to be seen. Rumors have used GTX 1070 and GTX 1080, backed by an earlier leak that revealed the alleged cooling shrouds of the two GPUs. On the other hand, other reports refer to them as the GTX X70 and GTX X80.
Whatever the name NVIDIA decides or whatever they decide to show first at the event, we won’t have to wait too long to find out.
Posted By CybrSlydr @ 3:48 PM
Thursday March 31st, 2016
| Power that's not painfully expensive
Unless you've had your head under a rock for the past six months, you probably already know that virtual reality headsets--namely the Oculus Rift and the HTC Vive--are the new hotness when it comes to PC peripherals. VR is all over the media, but what keeps getting repeated over and over is that you'll need a "high-end" PC to play games on the Rift or Vive.
I'll be the first to say that the term "high-end" is relative. For PC gamers, high-end means i7 processors and graphics cards like the GTX 980 Ti and the R9 Fury. While such high-end parts will give you the best experience in VR, you can have an enjoyable experience without a $650 GPU, just as with other games. So I set out to build a rig with the lowest-priced CPU and GPU that are certified to work with the Rift.
Source: PC Gamer
Posted By CybrSlydr @ 8:43 PM
| Jason Rubin is thinking a lot about whether or not we'll feel comfortable while taking a walk. The scope of the former THQ president's job is as big as it ever was, but as head of Oculus' game development group, the problems he's grappling with seem quaint in comparison: exciting, blue-sky, futuristic stuff that it's still a bit hard to believe is real. For instance, how are we going to move around virtual reality worlds while sat on our butts? Will it make us lose our lunch?
During a recent visit to Oculus HQ, we spoke to Rubin at length about his development teams and the third-party Rift devs who are solving entirely new problems in the medium. In this primordial stage, new guidelines are being introduced and struck down regularly. Everything is new, even things we take for granted in non-VR games. For instance, before we even get to movement: How do you represent the player's body, or another player's body? There's no single correct solution, and when what you're making is the first of its kind, there's no way to know for sure if your chosen method will work.
"It's going to take a long time for us to get to the point where we're iterative as opposed to revolutionary," says Rubin. "So we have these hand tracked devices now, right, all of the VR headsets. I'm looking at you, you're in VR, I'm in VR, I've got three points of information about what you're doing. I have head rotation and position, hand rotations and positions. I want your whole body. I don't know anything about your feet. How do I make that look like a human and not like some weird marionette that's kind of stretched amongst it?"
Source: PC Gamer
Posted By CybrSlydr @ 10:03 AM
Wednesday March 23rd, 2016
| For nearly ten years now, Intel has been on a "tick-tock" processor design schedule. Intel would move to a new manufacturing process, shrinking the chip and improving performance and power efficiency—this was the tick. Then it would introduce a new processor architecture that improved efficiency further and added new features—the tock.
Now though, according to The Motley Fool, this model is being scrapped in favor of a new, three-step one. In Intel's most recent 10-K filing (an annual report to the US Securities and Exchange Commission), it states: "We expect to lengthen the amount of time we will utilize our 14 [nanometer] and our next-generation 10 [nanometer] process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions."
There's also a handy image to show the differences in the two methodologies.
As pointed out by Legit Reviews, the tick-tock model has already been on the way out. Haswell came as a sort of "semi-tock", and Intel has announced that Kaby Lake will be "refreshing" Skylake. The previously announced 10-nanometer Cannonlake is coming in 2017, Ice Lake is coming in 2018, and this will be refreshed by Tiger Lake in 2019. So we're already seeing the new three-step model of Process, Architecture, Optimization being put into action.
Source: PC Gamer
Posted By CybrSlydr @ 3:17 PM
Tuesday March 15th, 2016
| Every year, I'm tempted to buy a Razer Blade gaming notebook. I haven't yet. Though Razer is the only company consistently making a high-quality, high-performance ultraportable laptop, the high price has always held me back. I just can't bring myself to pay $2,000+ for a computer that won't run next year's games well.
But Razer's new Blade has an answer to my conundrum. Just like the 12.5-inch Razer Blade Stealth that wowed us last month, the new 14-inch Blade is effectively future-proof. If you need more graphical horsepower -- say, in a year or four -- you'll be able to buy a Thunderbolt 3 docking station that adds the full muscle of a desktop graphics card. It lets you easily swap in a new graphics card, whenever you like, without even needing a screwdriver.
Posted By CybrSlydr @ 10:13 AM
Sunday March 13th, 2016
| AMD's eight-core enthusiast Zen CPUs, codenamed Summit Ridge, are allegedly slated for an October release on the new AM4 socket. A source claims that AMD already taped out the eight-core Summit Ridge CPU dies in January and is running them through testing and validation. This is the second major milestone we've heard about for Zen thus far, the first being the tape-out of the Zen core microarchitecture back in 2015. This means that not only has the core design been finalized, but the eight-core SOC (system on a chip) featuring Zen has also been completed.
Source: WCCF Tech
Finally we get a release date. Looks like 8 cores 16 threads and a lot better IPC.
I'm happy to wait this out so I can save up for it.
Posted By ShrimpBrime @ 6:12 PM
Friday March 11th, 2016
| EverQuest Next, the successor to Sony Online Entertainment's groundbreaking MMO EverQuest, has been cancelled. The bad news was revealed in a message posted by Daybreak Game Company President Russ Shanks, who said the studio had “set out to make something revolutionary,” but ultimately decided that it wasn't going to work.
“For those familiar with the internals of game development, you know that cancellations are a reality we must face from time to time. Inherent to the creative process are dreaming big, pushing hard and being brutally honest with where you land. In the case of EverQuest Next, we accomplished incredible feats that astonished industry insiders,” he wrote. “Unfortunately, as we put together the pieces, we found that it wasn’t fun. We know you have high standards when it comes to Norrath and we do too. In final review, we had to face the fact that EverQuest Next would not meet the expectations we—and all of you—have for the worlds of Norrath.”
“The future of the EverQuest franchise as a whole is important to us here at Daybreak. EverQuest in all its forms is near and dear to our hearts. EverQuest and EverQuest II are going strong. Rest assured that our passion to grow the world of EverQuest remains undiminished.”
In a separate message, Daybreak confirmed that work on EverQuest Next Landmark, the voxel-building MMO it announced back in 2013, is continuing, and that it will be out later this spring.
“As the community has grown and designs have flourished, we no longer view Landmark as just a building tool. We’ve been toiling away making Landmark into a wonder unto itself. While the look of our world was inspired by what was intended to be the voxel world of EverQuest Next, Landmark has evolved into its own game with its own unique identity and purpose,” EverQuest Executive Producer Holly Longdale wrote.
“The creativity of the Landmark community and the potential for telling stories in this digital world is beyond what we imagined. Our vision for Landmark is to provide a place where you can create ANYTHING, tell your own stories, and share your creativity with other players," she continued. "We are wrapping up a HUGE game update for Landmark with LOTS of new additions and improvements, some of which you’ve already seen in sneak preview posts from Emily 'Domino' Taylor on the forums. We are excited about what’s to come for Landmark and we can’t wait to see what you think.”
A “Landmark Launch” FAQ has a few more details, including that Landmark will not be free-to-play as was originally announced, but will instead carry a $10 price tag.
Source: PC Gamer
Posted By CybrSlydr @ 1:46 PM
Tuesday March 1st, 2016
| By Mitch Dyer As Microsoft moves to unify Xbox and PC, the failure of Games for Windows Live still haunts Phil Spencer.
“The amount I see Games for Windows Live come up in my Twitter feed when we talk about PC gaming, it’s staggering,” Spencer, Head of Xbox, said. “We are committed to this space…. We know we have a lot to prove.”
It's true. Microsoft has its heart in the right place, its 2016 goals look strong, and it's making huge, important moves, but it's still repeating the mistakes of its past.
At Microsoft’s Spring Showcase event in San Francisco last week, Phil Spencer spoke at length about a new philosophy for Microsoft: marrying Xbox and PC gaming. Spencer has said for nearly two years that PC gaming is important to Xbox and its audience, and to gamers as a whole, but it isn’t until now that we’re finally seeing the fruits of that ambition -- both in terms of games we can play and what Spencer has to say.
He explained that separate ecosystems that divide his game progress and friends list “aren’t putting my needs first,” because the biggest gaming audience is a group of people “who just play games.”
Those people can play Forza Motorsport 6: Apex, Rise of the Tomb Raider, Gears of War: Ultimate Edition, Ori and the Blind Forest, Quantum Break, Minecraft in VR, and other upcoming Windows 10 games like Sea of Thieves and Fable Legends. As time goes on, PC players can adapt their hardware to the increasing demands of software, which is something consoles have struggled with since their inception.
The major flaw with generational hardware, Spencer explained, is that “Hardware locks our software and our platforms together at the beginning of the generation.” For about a decade, these machines allow software innovation while restricting hardware innovation “while other platforms get better, faster, stronger.”
That could change with Xbox One, particularly as it draws more and more from PC initiatives -- whether it’s Early Access, user interface options, business models, and more.
Spencer believes “we will see more innovation in the console hardware space than we’ve ever seen. We’ll see us come out with new hardware capability during the generation, and allow us to run the same games forward and backward compatible.”
With Microsoft’s drive to marry its ecosystems -- with “Universal Windows Applications” running on the “Universal Windows Platform” -- Microsoft can “focus more on hardware innovations without invalidating the games that run on that platform.”
Old games and new games can coexist without the player losing anything, regardless of whether it’s a PC, Xbox One, or whatever comes next. Cross-Buy is a huge step in the right direction.
This all sounds great, but the fatal flaw is the Windows 10 Store, an exclusionary marketplace that actively ignores PC gamers’ needs and contradicts Spencer’s intent. Limiting players’ access to Windows 10 games (which are already restricted to one operating system) goes against everything Spencer’s otherwise rousing speech about PC gaming stands for.
Microsoft can’t have it both ways. It can’t limit what players can access while simultaneously espousing a philosophy of inclusion. The intent is good, but it’s clear there isn’t a full commitment to gamers or games as Spencer says -- it’s a commitment to Microsoft platforms.
If Microsoft is truly committed to the PC gamer, and really means to have a more neutral stance about how and where its players enjoy its games, Windows 10 games cannot be exclusive to the Windows 10 store. This is a backward line of thinking that aligns with Spencer’s frustrations about software relying on hardware during console transitions. For Microsoft’s PC initiative to avoid going directly against Spencer’s commendable goals, games absolutely must exist wherever players of any stripe want to play -- or Microsoft will repeat its mistakes with Games for Windows Live.
Indeed, Microsoft has a lot to prove.
Posted By CybrSlydr @ 10:00 AM
Tuesday February 16th, 2016
| By Kyree Leary A new data format has been created that can preserve data for as long as the Universe has existed.
Scientists from the University of Southampton's Optoelectronics Research Centre (ORC) used nanostructured glass to create Five Dimensional (5D) glass discs that are capable of storing 360 TB of data for up to 13.8 billion years.
The technology was first demonstrated back in 2013, but the scientists have since perfected the 'Superman memory crystal' and have even begun to preserve several major documents, including the Universal Declaration of Human Rights (UDHR), Isaac Newton’s Opticks, the Magna Carta and the King James Bible.
The data itself is recorded onto the glass using an ultrafast laser, which writes the information in three layers of nanostructured dots separated by five micrometers (five millionths of a meter).
"It is thrilling to think that we have created the technology to preserve documents and information and store it in space for future generations," said Professor Peter Kazansky, the team's lead researcher. "This technology can secure the last evidence of our civilization: all we’ve learnt will not be forgotten."
A paper detailing the scientists' work will be presented at the Society for Optical Engineering Conference in San Francisco on February 17. The team is also looking for industry partners, as they want to continue development on five-dimensional data storage and perhaps even commercialize it.
Posted By CybrSlydr @ 9:19 PM
Tuesday February 2nd, 2016
| Happy using Windows 7 or Windows 8? You might not be happy much longer, because Microsoft has announced Windows 10 will now start installing automatically on Windows 7 and Windows 8 PCs…
The development sees Microsoft follow through on its controversial October roadmap which said Windows 10 would have its status changed in 2016 to become a ‘Recommended’ upgrade in Windows Update. In basic terms this means anyone who uses Windows 7 or Windows 8 with default Windows Update settings (the vast majority) will now see Windows 10 begin installing by itself.
Posted By FunkZ @ 3:01 PM
Monday February 1st, 2016
| SOURCE:PC World
There’s nothing more frustrating than dealing with slow Internet at home, especially when you’re paying a steep premium for a fast connection speed. Washington, DC-based Reddit user and Comcast customer AlekseyP came up with an interesting solution for this problem. Instead of wasting time calling up Comcast over the issue, he is using the power of Raspberry Pi to complain to the Internet Service Provider over Twitter under the name @A_Comcast_User.
Every hour, AlekseyP’s Raspberry Pi (he didn’t specify which model) runs Internet speed tests and then stores that data. If his Internet speed drops below 50 megabits per second, the Pi tweets at Comcast about the slow speeds. AlekseyP says he pays for 150Mbps down and 10Mbps up.
Since AlekseyP’s Twitter script went live on October 30, his bot has tweeted at Comcast 16 times over Internet connection speeds. He says Internet usage at home is not causing the drop in bandwidth. In fact, he says that many times the tweets happened when no one was at home, or late at night when everyone was asleep.
Comcast tends to respond to most direct consumer complaints on Twitter and in this respect the company hasn’t failed AlekseyP. But the Reddit user declines Comcast’s request for help every time it’s offered. “I have chosen not to provide them my account or address because I do not want to [be] singled out as a customer; all their customers deserve the speeds they advertise,” he said on Reddit.
The impact on you at home: If you’re a Comcast customer, or with another ISP that handles customer service on Twitter, you can play along with a Raspberry Pi, too. AlekseyP posted the code to his Python script on Pastebin. This code will help get you started, but you’ll also need to install dependent programs and utilities such as speedtest-cli, a command line interface program that tests your bandwidth speeds via speedtest.net. Python, the core scripting language behind the tool, should already be installed on your Raspberry Pi’s operating system.
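AlekseyP's actual script is the one posted to Pastebin; purely as an illustration, the measure-then-complain logic can be sketched like this (the function names, threshold, and tweet wording here are hypothetical, not taken from his code):

```python
import subprocess

THRESHOLD_MBPS = 50  # tweet when the measured download speed drops below this

def measure_download_mbps():
    """Run speedtest-cli and return the download speed in Mbit/s."""
    out = subprocess.check_output(["speedtest-cli", "--simple"], text=True)
    # --simple prints three lines: Ping, Download and Upload figures.
    for line in out.splitlines():
        if line.startswith("Download:"):
            return float(line.split()[1])
    raise RuntimeError("no download figure in speedtest output")

def compose_tweet(mbps, handle="@comcast", paid_for_mbps=150):
    """Build the complaint text; a real bot would post it via the Twitter API."""
    return (f"Hey {handle}, why is my internet speed {mbps:.1f} Mbps "
            f"when I pay for {paid_for_mbps} Mbps?")

# Hourly wiring (cron, or a sleep loop) would then be roughly:
#   speed = measure_download_mbps()
#   if speed < THRESHOLD_MBPS:
#       post(compose_tweet(speed))   # e.g. via a Twitter client library
```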
Posted By CybrSlydr @ 12:13 PM