HI-TECH NEWS

by Mobileshop.ae
Comments: 1
Surfing obscene websites is criminal offence: Abu Dhabi

Abu Dhabi authorities have issued a stern warning to the public against using and surfing obscene websites, saying offenders can be easily detected and could face at least six months in prison plus a fine of up to Dh1 million.
 
In a statement, the Judicial Department said the emirate’s security authorities have the technology to detect all those who use, surf or download obscene and pornographic programmes online, wherever they are, inside or outside the UAE.
 
It said authorities, who are cooperating with competent security systems abroad, have recently tracked and arrested six men involved in such offences.
 
“The security authorities in Abu Dhabi will not tolerate such serious offences, especially those involving obscene films for children… authorities have the technology to detect all those involved in these crimes even if they use software programmes allowing them to access banned websites in the country,” said the statement, published in the Dubai-based Arabic language daily ‘Al Bayan’.
 
“Our experts have succeeded recently, in collaboration with international authorities, in identifying and locating the whereabouts of six men involved in using and publishing obscene online films for children… they have been arrested for interrogation.”
 
The statement said those convicted of such offences would be jailed for at least six months and fined between Dh150,000 and Dh1 million.

by Mobileshop.ae
Comments: 1
Facebook now lets journalists broadcast live video to your News Feed

Celebrities like Dwayne Johnson aren't the only people allowed to use Facebook's live streaming app anymore. The company is opening up the feature, which works through a standalone app called Mentions, to journalists with verified Facebook profiles and/or pages. Previously, only high-profile public figures like actors, athletes, musicians, politicians, and other "influencers" were permitted to stream live video to users through the Mentions app. Replays are available once the initial live stream is complete, and Facebook also allows comments and likes alongside broadcasts, much as YouTube and Periscope do.

"We want to make Facebook a better experience for journalists whether it’s used for news-gathering or better connecting with their readers or to drive distribution to their content," Vadim Lavrusik, product manager for Mentions,told wired . With the 2016 presidential campaign season underway, it's certainly a good time for Facebook to expand its criteria for who's allowed to broadcast live clips; they'll at least (probably) be more engaging than your random bit of self promotion from Michael Bublé, Serena Williams, or The Rock. If you think you're important enough to be streaming.

by Mobileshop.ae
Comments: 0
Can Apple Finally Take Over the Living Room?

On Wednesday at a media event in San Francisco, Apple unveiled a slate of new products, including one small, slender one with a touch pad, a button to access Siri, and a built-in accelerometer and gyroscope.

It wasn’t a new iPhone—though that, along with a bigger iPad, was among the announcements. Rather, it was a remote control for the new version of Apple TV, Apple’s contender for managing living room entertainment, which hasn’t been updated since 2012.

That wait may prove a good thing for the company. In the interim, consumers have gotten used to the idea of Internet-connected TVs that use a variety of apps, including video-streaming services like Netflix and Hulu, but no company has really emerged as a leader in tying it all together. The newest Apple TV, which will be available in late October for $149 with 32 gigabytes of storage or $199 with 64 gigabytes, is attempting to do that with an interface that offers lots of these apps, covering everything from watching movies to shopping, and that appears easy to navigate with the new remote.

The glass-topped touch pad is the most obvious way to navigate with Apple TV, as you can just swipe to get from one movie to the next and press down on it to select one.

Siri is clearly intended to be a big part of the navigation, too, as it gets its own button on the remote. Giving a demonstration of the device, Apple senior design producer Jen Folse showed how you can use Siri to look for very specific things like “show me that Modern Family episode with Edward Norton,” which Siri can find by searching through a bunch of services including iTunes, Netflix, Hulu, and HBO (you can also filter searches based on factors like cast, directors, and age ratings).

Apple is also trying to make apps a big part of Apple TV by adding a built-in app store. The device runs on the company’s tvOS, which is based on its mobile software, iOS, and developers will be able to make their own apps, which Apple is hoping will bring a range of activities to the living room. The company showed off several of them on Wednesday, including a shopping app from Gilt and a couple of games that rely on the new remote as a controller by taking advantage of the accelerometer, gyroscope, and buttons (some games will apparently allow multiple players if you use an iPhone or iPod Touch as an additional controller).

In a hands-on demo, I found the Apple TV controller easy to use, while the menu for finding movies, looking at photos, playing games, and so forth is also pretty simple to navigate. A button on the remote is a direct link to Siri, and when I asked her to “find some awesome movies from the ’90s” she complied with a bunch of suggestions ranging from Jurassic Park to Babe (I guess we have different ideas of what “awesome” means).

The touch pad on the top of the controller worked smoothly, though I didn’t get to run it through that many activities. I did try it out with a game, though, in which I played a blocky animated chicken trying desperately to cross the road (I was quickly hit by a car).

Dan Cryan, an analyst with IHS, says the upgrade shows Apple TV has “stopped being a hobby” for Apple, in large part because of the addition of the app store. He also says the app store could mean that a lot more video services have easy access to your TV.

He says cable’s dominance over home video entertainment is unlikely to go away anytime soon, though, adding that while Apple TV is a nice upgrade, “it’s unlikely to tip the world on its head overnight.”

by Mobileshop.ae
Comments: 2
Google Docs Voice Typing lets you speak instead of type

Last week, Google announced it has added free speech-to-text capabilities to Google Docs (Google calls it Voice Typing). This would have been huge news 20 years ago, yet when Google unveiled it, it was described in only a single paragraph in the middle of a larger blog entry. In a world with Apple’s Siri, Microsoft’s Cortana, and Google Now, a free speech-to-text service that works on multiple computing platforms may not seem like big news anymore.

Voice Typing is different, though; it’s kind of a built-in version of Dragon NaturallySpeaking (for those of you who remember and/or still use that program). Voice Typing works in Chrome on the desktop, as well as the Docs apps for Apple iOS (iPhone and iPad) and Android.

Here’s how it works: To start voice typing on an iOS device, tap the microphone icon to the left of the spacebar near the bottom of the screen. On an Android phone or tablet, tap the microphone icon on the right side of the screen above the on-screen keyboard. If you want to voice type on a Mac or Windows PC, you need to use Google Docs in the Chrome web browser, then select Tools > Voice Typing. A microphone icon with the tooltip “Click to speak” will appear in the browser window near your Docs document.

Google Docs Voice Typing currently supports 48 languages, including regional variants of Chinese, English, Portuguese, and Spanish. You do not need to perform any kind of training before using Voice Typing, and it doesn’t appear to need a special microphone. For this article, I used the built-in microphones of my Dell Windows notebook, a Nexus 6, and an iPhone 6+ to test Google’s speech-to-text.

Voice Typing does require you to speak words to add punctuation: “Period”, “Comma”, “Exclamation point”, “Question mark”, “New line”, and “New paragraph.” Unlike dedicated speech-to-text systems, Voice Typing does not have a way to correct or change text using just your voice. Even with Voice Typing turned on, you must use your keyboard (physical or on-screen) to make changes to text.

 

In addition to my regular voice, I tested how well Voice Typing would work on truly continuous speech by playing a Stephen Colbert video on YouTube into the microphone of my Nexus 6 phone running the Google Docs app. Google Docs recorded 288 words using Voice Typing by the time I pressed the Pause button. It did a credible job of transcribing a person speaking relatively fast; my rough estimate is that it was about 85 to 90% correct. And, of course, there is no punctuation, since you need to actually speak the punctuation marks for them to appear in the document.

One tip: Voice Typing doesn’t like it when you swear. For example, if I say, “What the f***?”, it censors the offending word in the transcribed text. This was, appropriately enough, first noted in a blog post about the linguistics of swearing.

I started, but didn’t finish, writing this article using Voice Typing. Unless you are a smooth extemporaneous speaker (I am not), it is not the fastest way to write more than a few sentences of text. And, like all speech-to-text systems, it works best in a relatively quiet environment. I’m not sure if I will use Voice Typing regularly. I can see myself using it to make a few notes on my phone. And it may be interesting to see how well it performs in an interview situation with multiple people.

by Mobileshop.ae
Comments: 0
Asynchronous compute, AMD, Nvidia, and DX12: What we know so far

Ever since DirectX 12 was announced, AMD and Nvidia have jockeyed for position regarding which of them would offer better support for the new API and its various features. One capability that AMD has talked up extensively is GCN’s support for asynchronous compute. Asynchronous compute allows all GPUs based on AMD’s GCN architecture to perform graphics and compute workloads simultaneously. Last week, an Oxide Games employee reported that, contrary to general belief, Nvidia hardware couldn’t perform asynchronous compute and that the performance impact of attempting to do so was disastrous on the company’s hardware.

This announcement kicked off a flurry of research into what Nvidia hardware did and did not support, as well as anecdotal claims that people would (or already did) return their GTX 980 Tis based on Ashes of the Singularity performance. We’ve spent the last few days in conversation with various sources working on the problem, including Mahigan and CrazyElf at Overclock.net, as well as parsing through various data sets and performance reports. Nvidia has not yet responded to our request for clarification, but here’s the situation as we currently understand it.

Nvidia, AMD, and asynchronous compute

When AMD and Nvidia talk about supporting asynchronous compute, they aren’t talking about the same hardware capability. The Asynchronous Command Engines in AMD’s GPUs (between two and eight, depending on which card you own) are capable of executing new workloads at latencies as low as a single cycle. A high-end AMD card has eight ACEs, and each ACE has eight queues. Maxwell, in contrast, has two pipelines, one of which is a high-priority graphics pipeline. The other has a queue depth of 31, but Nvidia can’t switch contexts anywhere near as quickly as AMD can.
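
As a point of reference, here is roughly what “asynchronous compute” looks like from the application side in DirectX 12. This is a minimal sketch (device creation, error handling, and the function name are our own, not from either vendor): it creates a direct queue for graphics alongside a separate compute-only queue. The API merely lets the application express two independent streams of work; whether the GPU actually executes them concurrently is entirely up to the hardware and driver, which is precisely the capability being argued over here.

```cpp
// Minimal sketch: create a graphics (direct) queue and a separate compute
// queue in D3D12. Work submitted to the compute queue *may* overlap with the
// graphics queue, depending on the hardware (e.g. GCN's ACEs) and the driver.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The direct queue accepts graphics, compute, and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // The compute queue accepts compute and copy command lists only.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```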

According to a talk given at GDC 2015, there are restrictions on Nvidia’s preemption capabilities. Additional text below the slide explains that “the GPU can only switch contexts at draw call boundaries” and “On future GPUs, we’re working to enable finer-grained preemption, but that’s still a long way off.” To explore the various capabilities of Maxwell and GCN, users at Beyond3D and Overclock.net have used an asynchronous compute test that evaluates the capability on both AMD and Nvidia hardware. The benchmark has been revised multiple times over the past week, so early results aren’t comparable to the data we’ve seen in later runs.

Note that this is a test of asynchronous compute latency, not performance. This doesn’t test overall throughput — in other words, just how long it takes to execute — and the test is designed to demonstrate if asynchronous compute is occurring or not. Because this is a latency test, lower numbers (closer to the yellow “1” line) mean the results are closer to ideal.
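
The community benchmarks are considerably more elaborate than this, but the basic measurement idea behind them can be sketched roughly as follows (a hedged example, not the actual Beyond3D test; the helpers RecordGraphicsWork() and RecordComputeWork() are hypothetical stand-ins for recorded, closed command lists): time a graphics-only submission, then time the same graphics work with compute dispatches submitted on a separate compute queue. If the combined run finishes in about the same time as the graphics-only run, the GPU overlapped the two workloads; if it takes roughly the sum of both, execution was effectively serialized.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <chrono>

// Hypothetical helpers assumed to exist for this sketch: each returns a
// recorded and closed command list containing the test workload.
ID3D12CommandList* RecordGraphicsWork();
ID3D12CommandList* RecordComputeWork();

// Block the CPU until `queue` has drained everything submitted so far.
static void WaitIdle(ID3D12CommandQueue* queue, ID3D12Fence* fence,
                     UINT64& fenceValue, HANDLE fenceEvent)
{
    queue->Signal(fence, ++fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
}

// Returns wall-clock milliseconds for one submission. Pass computeQueue as
// nullptr to time the graphics workload on its own.
double TimeSubmission(ID3D12CommandQueue* graphicsQueue,
                      ID3D12CommandQueue* computeQueue,
                      ID3D12Fence* fence, UINT64& fenceValue, HANDLE fenceEvent)
{
    ID3D12CommandList* gfx[] = { RecordGraphicsWork() };
    const auto start = std::chrono::steady_clock::now();

    graphicsQueue->ExecuteCommandLists(1, gfx);
    if (computeQueue != nullptr)
    {
        ID3D12CommandList* compute[] = { RecordComputeWork() };
        computeQueue->ExecuteCommandLists(1, compute);
        WaitIdle(computeQueue, fence, fenceValue, fenceEvent);
    }
    WaitIdle(graphicsQueue, fence, fenceValue, fenceEvent);

    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

The actual tests additionally sweep the amount of compute work submitted per pass, which is how results end up plotted against rising thread counts in the charts below.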

Radeon R9 290

Here’s the R9 290’s performance. The yellow line is perfection — that’s what we’d get if the GPU switched and executed instantaneously. The y-axis of the graph shows performance normalized to 1x, which is where we’d expect perfect asynchronous latency to be. The red line is what we are most interested in. It shows GCN performing nearly ideally in the majority of cases, holding performance steady even as thread counts rise. Now, compare this to Nvidia’s GTX 980 Ti.

GeForce GTX 980 Ti

Attempting to execute graphics and compute concurrently on the GTX 980 Ti causes dips and spikes in performance and little in the way of gains. Right now, there are only a few thread counts where Nvidia matches ideal performance (latency, in this case) and many cases where it doesn’t. Further investigation has indicated that Nvidia’s async pipeline appears to lean on the CPU for some of its initial steps, whereas AMD’s GCN handles the job in hardware.

Right now, the best available evidence suggests that when AMD and Nvidia talk about asynchronous compute, they are talking about two very different capabilities. “Asynchronous compute,” in fact, isn’t necessarily the best name for what’s happening here. The question is whether or not Nvidia GPUs can run graphics and compute workloads concurrently. AMD can, courtesy of its ACE units.

 

It’s been suggested that AMD’s approach is more like Hyper-Threading, which allows the GPU to work on disparate compute and graphics workloads simultaneously without a loss of performance, whereas Nvidia may be leaning on the CPU for some of its initial setup steps and attempting to schedule simultaneous compute + graphics workloads for ideal execution. Obviously that process isn’t working well yet. Since our initial article, Oxide has stated the following:

“We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute.”

Here’s what that likely means, given Nvidia’s own presentations at GDC and the various test benchmarks that have been assembled over the past week. Maxwell does not have a GCN-style configuration of asynchronous compute engines, and it cannot switch between graphics and compute workloads as quickly as GCN. According to Beyond3D user Ext3h:

“There were claims originally that Nvidia GPUs wouldn’t even be able to execute async compute shaders in an async fashion at all; this myth was quickly debunked. What became clear, however, is that Nvidia GPUs preferred a much lighter load than AMD cards. At small loads, Nvidia GPUs would run circles around AMD cards. At high load, well, quite the opposite, up to the point where Nvidia GPUs took such a long time to process the workload that they triggered safeguards in Windows, which caused Windows to pull the trigger and kill the driver, assuming that it got stuck.

“Final result (for now): AMD GPUs are capable of handling a much higher load, about 10x what Nvidia GPUs can handle. But they also need about 4x the pressure applied before they get to play out their capabilities.”

Ext3h goes on to say that preemption in Nvidia’s case is only used when switching between graphics contexts (1x graphics + 31 compute mode) and “pure compute context,” but claims that this functionality is “utterly broken” on Nvidia cards at present. He also states that while Maxwell 2 (the GTX 900 family) is capable of parallel execution, “The hardware doesn’t profit from it much though, since it has only little ‘gaps’ in the shader utilization either way. So in the end, it’s still just sequential execution for most workloads, though if you did manage to stall the pipeline in some way by constructing an unfortunate workload, you could still profit from it.”

Nvidia, meanwhile, has represented to Oxide that it can implement asynchronous compute, and that this capability was simply not yet fully enabled in drivers. Like Oxide, we’re going to wait and see how the situation develops. The analysis thread at Beyond3D makes it very clear that this is an incredibly complex question, and much of what Nvidia and Maxwell may or may not be doing is unclear.

Earlier, we mentioned that AMD’s approach to asynchronous computing superficially resembled Hyper-Threading. There’s another way in which that analogy may prove accurate: When Hyper-Threading debuted, many AMD fans asked why Team Red hadn’t copied the feature to boost performance on K7 and K8. AMD’s response at the time was that the K7 and K8 processors had much shorter pipelines and very different architectures, and were intrinsically less likely to benefit from Hyper-Threading as a result. The P4, in contrast, had a long pipeline and a relatively high stall rate. If one thread stalled, HT allowed another thread to continue executing, which boosted the chip’s overall performance.

GCN-style asynchronous computing is unlikely to boost Maxwell performance, in other words, because Maxwell isn’t really designed for these kinds of workloads. Whether Nvidia can work around that limitation (or implement something even faster) remains to be seen.

by Mobileshop.ae
Comments: 1
A laser and a Raspberry Pi can disable a self-driving car

In the self-driving car business, decisions are only as good as your sensor data. A state-of-the-art Velodyne LIDAR of the kind that adorns benchmark research vehicles will set you back $80,000, and a setup capable of winning the DARPA Urban Challenge probably requires the better part of a cool million. Unfortunately, all it may take to spoof sophisticated sensors like these is a cheap laser pointer pulsed by something as simple as an Arduino or Raspberry Pi.

As IEEE Spectrum reports, security specialist Jonathan Petit will be presenting a disturbingly easy new hack this November at the Black Hat Europe conference. After recording the probe signals from an IBEO Lux lidar unit, Petit simply fired them back at the emitter using his laser. As long as they were synchronized, the lidar unit ‘saw’ an illusory object in front of it. The trick works up to 100 meters away in any direction — at the front, back or side — and doesn’t even require a tightly focused beam.

Although other hacks, like spoofing the vehicle’s GPS or tire sensors, have been done before, Petit’s hack could potentially bring a vehicle travelling at speed to a full stop. Several 3D-rendered vehicles could be placed not only in front of the car, but actively moving toward it. That would present quite a gauntlet to any control system now on the market.

Lidar systems don’t operate in a radiation band that is licensed the way short-range radar is. Nor do they typically encode or encrypt their pulses. These realities make them particularly vulnerable to anyone deliberately targeting them. But lidar systems are evolving rapidly, becoming not just cheaper, but more capable. So-called ‘sensor-fusion’ technology is also evolving to the point where hacking just a single sensor, or a single kind of sensor, may not be enough to overwhelm the system.

 

For example, last week a radically new kind of laser system was described in Scientific Reports that combines an electrically-pumped vertical-cavity surface-emitting laser (VCSEL) with a micromechanical resonator on a single chip. This device would be able to sweep the output beam across a broad wavelength band in a microsecond (as opposed to 10 milliseconds) to create a highly efficient LIDAR source beam. Putting the wavelength control functions inside the laser itself in this way would mean tiny, fast, and low-power sensors at a fraction of the cost.

by Mobileshop.ae
Comments: 1
UAE residents' front-row seats for iPhone 6s, 6s Plus launch tonight

What will gadget-loving UAE residents be doing at 9pm on 9-9 (tonight)?

We’re bound to be glued to our screens to figure out what Apple has in store for us in its latest instalment of iPhones – the iPhone 6s and the iPhone 6s Plus.

With just hours to go before the covers are taken off the new devices, rumours are flying thick and fast about what the new iPhones may or may not sport under their hoods.

UAE residents are indeed early adopters of technology, and that’s one of the reasons why smartphone manufacturers from BlackBerry to Samsung have Dubai in their global launch schedules.

by Mobileshop.ae
Comments: 1
Dubai's Dh25bn Mall of the World will now be a 'future city'

Dubai Holding is currently re-working the master plan of the mega Mall of the World development and now aims to build it as the 'future city' of Dubai.

The new components of the development include residential and office space, with the master plan being re-engineered to integrate public transport systems such as the Metro, trams, buses and water transport to ease traffic within and outside the development.

“The project has not been stalled… it is in redevelopment stage.

"What Dubai Holding wants to do given the strategic location of the land – which is as large as Downtown Dubai - is to have patience and search for what is going to be the very best result for the site?” said Morgan Parker, Chief Operating Officer of Sufouh Development, the new company set up to oversee the development of Mall of the World.

“We are trying to forecast what Dubai is going to be 50 years from now and so we are not building a project that is a statement on the world today,” he asserted.

The site in the Al Sufouh area is currently occupied by the Dubai Police Academy.

“The academy is not moving for the next two years and so we have time to find the best solutions not just for the development but also for the surrounding area.

"Our challenge is to create a tourism destination in a climate-controlled environment and not to create any congestion and traffic jams.”

Phase one of the development is likely to begin in the next 18 months, with full construction planned only after the existing academy relocates to its new location in Dubai Academic City.

The project, announced at Cityscape 2014, is likely to cost Dh25 billion and will include a shopping mall with an area of eight million square feet; the world’s largest theme park, covered by a glass dome that will be open during the winter months; a dedicated wellness zone; a cultural celebration district; and a wide range of hospitality options comprising 20,000 hotel rooms.

“It is not just about creating the world’s largest mall, but seamlessly integrating the hospitality, residential, commercial and entertainment lifestyle options into the bigger picture.”

In fact, the Mall of the World will not have one large mall, but three urban malls, almost two-thirds the size of Mall of the Emirates. There will be dozens of public plazas and entertainment zones, 23 parks, hotels, etc., that will be linked to each other through air-conditioned arcades and climate-controlled spaces.

by landmarkshops
Comments: 0
Fallout 4 can be pre-ordered along with the Pip-Boy Replica at Bethesda store

It sounds crazy, but you really can pre-order the Pip-Boy device along with the game, which goes on sale on November 10, 2015.

Get ready for the nuclear apocalypse!

by landmarkshops
Comments: 0
Spy pictures of the new LG Nexus 5X were revealed

It is expected to go on sale by the end of September 2015, with a base price of AED 1,500. The device will run Android 6.0 Marshmallow.