HI-TECH NEWS with #AMD hashtag

#AI, #AMD, #Intel, #Nvidia
by Amanda's
Comments: 0

Intel is the king of a shrinking kingdom. Every traditional desktop or laptop PC runs on the Santa Clara company’s processors, but that tradition is fast being eroded by more mobile, ARM-powered alternatives. Apple’s most important personal computers now run iOS, Google’s flagship Chromebook has an ARM flavor, and Microsoft just announced Windows for ARM. And what’s more, the burden of processing tasks is shifting away from the personal device and distributed out to networks of server farms up in the proverbial cloud, leaving Intel with a big portfolio of chips and no obvious customer to sell millions of them to.

One of the longest-running bugbears of PC gaming is screen tearing — every so often you’ll notice that, when in motion, visuals on screen appear torn or distorted, even on the best possible PC. Screen tearing happens when the PC’s graphics card pushes out frames either faster or slower than the monitor can refresh its image, resulting in visible jitter and split frames. While only seen for a fraction of a second, tearing is extremely jarring and can ruin the experience of playing a game.
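To make the mechanism concrete, here is a small, purely illustrative Python sketch (not taken from any benchmark or driver) that models a GPU delivering frames at a fixed rate to a fixed-refresh monitor and counts how many frame swaps land mid-scanout, which is when a tear can show up with vsync off. The frame rates used are arbitrary example values.

# Toy model: count how often a new frame becomes ready partway through a
# monitor refresh, i.e. the condition under which a visible tear can occur.

def count_tears(fps: float, refresh_hz: float, seconds: float = 1.0) -> int:
    frame_time = 1.0 / fps            # time the GPU spends producing one frame
    refresh_time = 1.0 / refresh_hz   # time the monitor spends drawing one refresh
    tears = 0
    t = frame_time
    while t < seconds:
        # Where within the current refresh cycle does the frame swap land?
        phase = t % refresh_time
        # A swap that does not line up with the start of a refresh means the
        # monitor scans out parts of two different frames in one pass.
        if 1e-9 < phase < refresh_time - 1e-9:
            tears += 1
        t += frame_time
    return tears

if __name__ == "__main__":
    print(count_tears(fps=92, refresh_hz=60))   # unsynced 92 fps on a 60 Hz panel: torn almost every frame
    print(count_tears(fps=60, refresh_hz=60))   # perfectly synced: no tears in this idealized model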

by AnandaAnatoly
Comments: 0

If you've picked up one of AMD's Radeon RX 480 graphics cards with 4GB of video memory, there's a chance you could double that RAM with a relatively simple tweak.

So what's the story here? Apparently, some 4GB RX 480 models actually have 8GB of memory physically on board and have simply been limited to addressing half of it – a restriction that can be circumvented by flashing the card with the BIOS of the 8GB flavor.
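For the curious, here is a rough outline of that flashing procedure expressed as a short Python wrapper. Note the assumptions: it presumes the ATIFlash command-line tool is installed, that adapter index 0 is the RX 480, and that the -s (save) and -p (program) switches behave as on common ATIFlash builds — check the documentation for your own version before trying anything, since flashing the wrong ROM can brick a card.

# Hedged sketch of the RX 480 BIOS swap described above; the tool name,
# flags and file names are assumptions for illustration, not a verified recipe.
import subprocess

def backup_bios(adapter: int = 0, path: str = "rx480_4gb_backup.rom") -> None:
    # Save the card's current BIOS first so the change can be reverted.
    subprocess.run(["atiflash", "-s", str(adapter), path], check=True)

def flash_bios(rom_path: str, adapter: int = 0) -> None:
    # Program the 8GB-variant BIOS image onto the card.
    subprocess.run(["atiflash", "-p", str(adapter), rom_path], check=True)

if __name__ == "__main__":
    backup_bios()
    flash_bios("rx480_8gb.rom")  # hypothetical filename for the 8GB BIOS image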

by ALI
Comments: 0

AMD at Computex 2016 unveiled its 7th generation laptop processors. Apart from two FX-Series APUs, the company introduced seven new APUs in its A-Series and E-Series. The chipmaker also unveiled the Radeon RX 480 graphics card based on its Polaris architecture - the card is VR ready, AMD says, and has a suggested retail price of just $199 - seriously lowering the price of entry for PC users to properly enjoy virtual reality. AMD's upcoming Zen-based Summit Ridge processors were also partially detailed at the event.

by Paulite
Comments: 0
AMD's new desktop chips run faster, cooler and quieter than ever

PC gamers will want to take a look at this

 

AMD has garnered a reputation in the PC component world for focusing on throughput over thermals, or power over practicality. That's been true for some time, but the chipmaker is looking to turn that around with its latest desktop processors for 2016.

The company's big-ticket item is more of an upgrade to one of the firm's most popular chips: the AMD FX-8370 has now been equipped with a newly designed cooler called the Wraith. This massive black cooler replaces the chip's current stock cooler.

But looks can be deceiving as, despite its size, this cooler is said to run at a maximum of 39 decibels (dBA) – or "practically inaudible," according to AMD. And, for the show-off PC gamers out there, the cooler features its own backlit illumination.

How does it work? Mostly thanks to its relatively enormous size, the Wraith Cooler has 24% more cooling fin surface area and thus 34% more airflow, while generating one-tenth the noise of its predecessor. In short, bigger fans have to work less hard to move air.

Putting the icing on the cake, AMD says that the Wraith Cooler will come stock on new runs of the FX-8370 at no additional cost. The current chip with the older cooler will drop in price relative to the new hotness. (Newegg pegs the current FX-8370 at $209.)

Chips at an affordable buy-in

AMD also introduced three new chips for its line of FM2+ socket motherboards: the high-performance, quad-core Athlon X4 845 at $69.99 (3.5-3.8GHz, no GPU, 65W); the dual-core, not-yet-priced A6-7470K (3.7-4.0GHz, 4 GPU cores, 65W); and the quad-core A10-7860K at $117.99 (3.6-4.0GHz, 8 GPU cores, 65W).

Both the Athlon and A10 chips in this lineup use AMD's updated thermal solution, a red, Wraith-like cooler that allows these CPUs to operate as if they had 95W of available power at 65W thermal design power ratings.

by Mobileshop.ae
Comments: 0
Asynchronous compute, AMD, Nvidia, and DX12: What we know so far

Ever since DirectX 12 was announced, AMD and Nvidia have jockeyed for position regarding which of them would offer better support for the new API and its various features. One capability that AMD has talked up extensively is GCN’s support for asynchronous compute. Asynchronous compute allows all GPUs based on AMD’s GCN architecture to perform graphics and compute workloads simultaneously. Last week, an Oxide Games employee reported that, contrary to general belief, Nvidia hardware couldn’t perform asynchronous compute and that the performance impact of attempting to do so was disastrous on the company’s hardware.

This announcement kicked off a flurry of research into what Nvidia hardware did and did not support, as well as anecdotal claims that people would return (or already had returned) their GTX 980 Tis based on Ashes of the Singularity performance. We’ve spent the last few days in conversation with various sources working on the problem, including Mahigan and CrazyElf at Overclock.net, as well as parsing through various data sets and performance reports. Nvidia has not yet responded to our request for clarification, but here’s the situation as we currently understand it.

Nvidia, AMD, and asynchronous compute

When AMD and Nvidia talk about supporting asynchronous compute, they aren’t talking about the same hardware capability. The Asynchronous Compute Engines (ACEs) in AMD’s GPUs (between two and eight, depending on which card you own) are capable of executing new workloads at latencies as low as a single cycle. A high-end AMD card has eight ACEs, and each ACE has eight queues. Maxwell, in contrast, has two pipelines, one of which is a high-priority graphics pipeline. The other has a queue depth of 31 — but Nvidia can’t switch contexts anywhere near as quickly as AMD can.
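As a quick illustration of the difference, here is a minimal Python sketch that does nothing more than lay out the two front-end configurations described above. The counts come from the paragraph itself; the structure is illustrative, not vendor documentation.

# High-end GCN as described: eight Asynchronous Compute Engines (ACEs),
# each exposing eight queues that can accept work independently.
gcn_queues = [f"ACE{ace}/queue{q}" for ace in range(8) for q in range(8)]

# Maxwell as described: two pipelines, one high-priority graphics pipeline
# and a second pipeline with a queue depth of 31.
maxwell_pipelines = {
    "graphics": "high-priority pipeline",
    "compute": "single pipeline, queue depth 31",
}

print(f"GCN: {len(gcn_queues)} independently schedulable queues")
print(f"Maxwell: {len(maxwell_pipelines)} pipelines -> {maxwell_pipelines}")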

According to a talk given at GDC 2015, there are restrictions on Nvidia’s preemption capabilities. Additional text below the slide explains that “the GPU can only switch contexts at draw call boundaries” and “On future GPUs, we’re working to enable finer-grained preemption, but that’s still a long way off.” To explore the various capabilities of Maxwell and GCN, users at Beyond3D and Overclock.net have used an asynchronous compute test that evaluates the capability on both AMD and Nvidia hardware. The benchmark has been revised multiple times over the past week, so early results aren’t comparable to the data we’ve seen in later runs.

Note that this is a test of asynchronous compute latency, not performance. It doesn’t measure overall throughput — only how long the workload takes to execute — and the test is designed to demonstrate whether asynchronous compute is occurring or not. Because this is a latency test, lower numbers (closer to the yellow “1” line) mean the results are closer to ideal.
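For readers who want to see what "normalized latency" means in practice, here is a minimal sketch of the general methodology — assumed from the description above, not the actual Beyond3D/Overclock.net benchmark code. Time the compute work alone, the graphics work alone, and then both submitted together: with true asynchronous execution the combined time stays close to the slower of the two (a normalized value near 1), while serialized execution pushes it toward the sum. The CPU workloads below are stand-ins purely to make the sketch runnable.

import time

def timed(workload) -> float:
    # Wall-clock time for a single run of the given callable.
    start = time.perf_counter()
    workload()
    return time.perf_counter() - start

def normalized_async_latency(graphics_work, compute_work, combined_work) -> float:
    t_gfx = timed(graphics_work)
    t_cmp = timed(compute_work)
    t_both = timed(combined_work)
    ideal = max(t_gfx, t_cmp)   # perfect overlap: the "1" line on the charts
    return t_both / ideal       # ~1.0 suggests async execution, ~2.0 suggests serialization

if __name__ == "__main__":
    gfx = lambda: sum(i * i for i in range(200_000))       # stand-in "graphics" load
    cmp_work = lambda: sum(i * i for i in range(200_000))  # stand-in "compute" load
    serialized = lambda: (gfx(), cmp_work())                # no overlap at all
    print(round(normalized_async_latency(gfx, cmp_work, serialized), 2))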

Radeon R9 290

Here’s the R9 290’s performance. The yellow line is perfection — that’s what we’d get if the GPU switched and executed instantaneously. The y-axis of the graph shows performance normalized to 1x, which is where we’d expect perfect asynchronous latency to sit. The red line is what we’re most interested in: it shows GCN performing nearly ideally in the majority of cases, holding performance steady even as thread counts rise. Now, compare this to Nvidia’s GTX 980 Ti.

GeForce GTX 980 Ti

Attempting to execute graphics and compute concurrently on the GTX 980 Ti causes dips and spikes in performance and little in the way of gains. Right now, there are only a few thread counts where Nvidia matches ideal performance (latency, in this case) and many cases where it doesn’t. Further investigation has indicated that Nvidia’s async pipeline appears to lean on the CPU for some of its initial steps, whereas AMD’s GCN handles the job in hardware.

Right now, the best available evidence suggests that when AMD and Nvidia talk about asynchronous compute, they are talking about two very different capabilities. “Asynchronous compute,” in fact, isn’t necessarily the best name for what’s happening here. The question is whether or not Nvidia GPUs can run graphics and compute workloads concurrently; AMD can, courtesy of its ACE units.

 

It’s been suggested that AMD’s approach is more like Hyper-Threading, which allows the GPU to work on disparate compute and graphics workloads simultaneously without a loss of performance, whereas Nvidia may be leaning on the CPU for some of its initial setup steps and attempting to schedule simultaneous compute + graphics workloads for ideal execution. Obviously that process isn’t working well yet. Since our initial article, Oxide has stated the following:

“We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute.”

Here’s what that likely means, given Nvidia’s own presentations at GDC and the various test benchmarks that have been assembled over the past week. Maxwell does not have a GCN-style configuration of asynchronous compute engines, and it cannot switch between graphics and compute workloads as quickly as GCN. According to Beyond3D user Ext3h:

“There were claims originally, that Nvidia GPUs wouldn’t even be able to execute async compute shaders in an async fashion at all, this myth was quickly debunked. What become clear, however, is that Nvidia GPUs preferred a much lighter load than AMD cards. At small loads, Nvidia GPUs would run circles around AMD cards. At high load, well, quite the opposite, up to the point where Nvidia GPUs took such a long time to process the workload that they triggered safeguards in Windows. Which caused Windows to pull the trigger and kill the driver, assuming that it got stuck.

“Final result (for now): AMD GPUs are capable of handling a much higher load. About 10x times what Nvidia GPUs can handle. But they also need also about 4x the pressure applied before they get to play out there capabilities.”

Ext3h goes on to say that preemption in Nvidia’s case is only used when switching between graphics contexts (1x graphics + 31 compute mode) and a “pure compute context,” but claims that this functionality is “utterly broken” on Nvidia cards at present. He also states that while Maxwell 2 (the GTX 900 family) is capable of parallel execution, “The hardware doesn’t profit from it much though, since it has only little ‘gaps’ in the shader utilization either way. So in the end, it’s still just sequential execution for most workload, even though if you did manage to stall the pipeline in some way by constructing an unfortunate workload, you could still profit from it.”

Nvidia, meanwhile, has represented to Oxide that it can implement asynchronous compute, and that this capability simply has not been fully enabled in drivers yet. Like Oxide, we’re going to wait and see how the situation develops. The analysis thread at Beyond3D makes it very clear that this is an incredibly complex question, and much of what Nvidia and Maxwell may or may not be doing is unclear.

Earlier, we mentioned that AMD’s approach to asynchronous computing superficially resembled Hyper-Threading. There’s another way in which that analogy may prove accurate: When Hyper-Threading debuted, many AMD fans asked why Team Red hadn’t copied the feature to boost performance on K7 and K8. AMD’s response at the time was that the K7 and K8 processors had much shorter pipelines and very different architectures, and were intrinsically less likely to benefit from Hyper-Threading as a result. The P4, in contrast, had a long pipeline and a relatively high stall rate. If one thread stalled, HT allowed another thread to continue executing, which boosted the chip’s overall performance.

GCN-style asynchronous computing is unlikely to boost Maxwell performance, in other words, because Maxwell isn’t really designed for these kinds of workloads. Whether Nvidia can work around that limitation (or implement something even faster) remains to be seen.