Why data-driven businesses need a data catalog

Relational databases, data lakes, and NoSQL data stores are powerful at inserting, updating, querying, searching, and processing data. But the irony of working with data management platforms is that they usually don't provide robust tools or user interfaces for sharing what's inside them. They are more like data vaults: you know there's valuable data inside, but you have no easy way to assess it from the outside.

The business challenge is dealing with a multitude of data vaults: multiple enterprise databases, smaller data stores, data centers, clouds, applications, BI tools, APIs, spreadsheets, and open data sources.



Facebook releases low-latency online speech recognition framework

Facebook AI Research (FAIR) today said it's open-sourcing wav2letter@anywhere, a deep learning-based inference framework that achieves fast performance for online automatic speech recognition in cloud or embedded edge environments. wav2letter@anywhere is based on the neural net-based language models wav2letter and wav2letter++, which FAIR called the fastest open source speech recognition system available upon its release in December 2018.

Automatic speech recognition, or ASR, is used to turn audio of spoken words into text, then infer the speaker's intent in order to carry out a task. An API, available on GitHub through the wav2letter++ repository, is built to support concurrent audio streams and popular kinds of deep learning speech recognition models, like convolutional neural networks (CNNs) and recurrent neural networks (RNNs), in order to deliver the scale necessary for online ASR.
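The paragraph above describes an online recognizer that consumes many concurrent audio streams, decoding each one incrementally as audio arrives. As a rough illustration of that pattern (this is a toy sketch, not the wav2letter++ API; the class name, chunk size, and stub decoder are all invented for the example):

```python
from collections import defaultdict

CHUNK = 4  # frames per inference chunk (toy value)

class StreamingASR:
    """Toy online recognizer: buffers audio per stream and emits a
    partial result whenever a full chunk of frames has accumulated."""

    def __init__(self):
        self.buffers = defaultdict(list)

    def accept(self, stream_id, frames):
        """Append frames for one stream; return any newly decoded chunks."""
        self.buffers[stream_id].extend(frames)
        out = []
        while len(self.buffers[stream_id]) >= CHUNK:
            chunk = self.buffers[stream_id][:CHUNK]
            self.buffers[stream_id] = self.buffers[stream_id][CHUNK:]
            out.append(self._decode(chunk))
        return out

    def _decode(self, chunk):
        # Stand-in for the real acoustic + language models: a production
        # system would run a CNN or RNN over the audio features here.
        return f"<{len(chunk)} frames decoded>"

asr = StreamingASR()
# Two concurrent streams interleaved, as an online server would see them.
print(asr.accept("call-1", [0.1, 0.2]))      # []  (not enough audio yet)
print(asr.accept("call-2", [0.3] * 5))       # ['<4 frames decoded>']
print(asr.accept("call-1", [0.4, 0.5, 0.6])) # ['<4 frames decoded>']
```

Keeping an independent buffer per stream is what lets one process serve many callers at once, which is the scaling property the article attributes to the API.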


Tool predicts how fast code will run on a chip

MIT researchers have invented a machine-learning tool that predicts how fast computer chips will execute code from various applications.  

To get code to run as fast as possible, developers and compilers — programs that translate a programming language into machine-readable code — typically use performance models that run the code through a simulation of given chip architectures.

Compilers use that information to automatically optimize code, and developers use it to tackle performance bottlenecks on the microprocessors that will run it. But performance models for machine code are handwritten by a relatively small group of experts and are not properly validated. As a consequence, the simulated performance measurements often deviate from real-life results. 
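The hand-written performance models mentioned above typically assign each instruction an estimated cost and sum the costs over a basic block. A minimal sketch of that idea (the opcodes and cycle counts below are invented for illustration; real models, and the learned tool that aims to replace them, must also account for pipelining, port contention, and data dependencies):

```python
# Toy hand-written performance model: each opcode gets an estimated
# cycle cost, and a basic block's predicted runtime is the sum over
# its instructions. All numbers here are made up for the example.
COST_CYCLES = {
    "mov": 1,
    "add": 1,
    "mul": 3,
    "div": 20,
    "load": 4,
}

def predict_cycles(basic_block):
    """Estimate cycles for a list of opcodes under the toy cost table."""
    return sum(COST_CYCLES[op] for op in basic_block)

block = ["load", "mov", "mul", "add"]
print(predict_cycles(block))  # 9
```

Because such tables are hand-tuned per microarchitecture and hard to validate, their predictions can drift from measured performance — which is the gap the MIT researchers' learned model is meant to close.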



A Sobering Message About the Future at AI's Biggest Party

More than 13,000 artificial intelligence mavens flocked to Vancouver this week for the world’s leading academic AI conference, NeurIPS. The venue included a maze of colorful corporate booths aiming to lure recruits for projects like software that plays doctor. Google handed out free luggage scales and socks depicting the colorful bikes employees ride on its campus, while IBM offered hats emblazoned with “I ❤️A👁.”

Tuesday night, Google and Uber hosted well-lubricated, over-subscribed parties. At a bleary 8:30 the next morning, one of Google’s top researchers gave a keynote with a sobering message about AI’s future.

Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that can get better at a specific task through experience or by seeing labeled examples of correct answers.


Tech firms are winning the AI race because they understand data better than other sectors

Artificial intelligence is already powering much of the technology helping to drive the modern economy. AI is now an essential part of how we use the internet but can also be found in stock exchanges, advanced factories and automated warehouses. It is starting to drive our cars and even vacuum our floors. And yet only a fraction of the companies that stand to significantly benefit from AI are exploiting this approach to help deliver their products and services.

One important reason for this is a lack of high-quality data. Technology giants such as Google, Microsoft and Amazon have been able to make great strides in AI—developing software to answer our questions and identify what's in our photos—because of their vast data-gathering operations. But many established industries that could benefit from AI and advanced robotics are struggling to gather, manage and use data in a helpful way.

Having high-quality and trustworthy data is key to helping companies better understand their markets and customers and to enabling automated decision making. At an infrastructure level, data can guide planners and developers and help optimize the use and maintenance of buildings, roads and railways. This could also help reduce carbon emissions by making our infrastructure last longer and work more efficiently, reducing wasted energy and unnecessary traffic.


Apple's Latest Deal Shows How AI Is Moving Right Onto Devices

Apple dropped $200 million this week on a company that makes lightweight artificial intelligence. It’s all about keeping an edge in AI ... by adding more AI to the edge.

The acquisition of Xnor.ai, a Seattle startup working on low-power machine learning software and hardware, points to a key AI battleground for Apple and other tech heavyweights—packing ever-more intelligence into smartphones, smartwatches, and other smart devices that do computing on the “edge” rather than in the cloud. And doing it without killing your battery.

“Machine learning is going to happen at the edge in a big way,” predicts Subhasish Mitra, a professor at Stanford who is working on low-power chips for AI. “The big question is how do you do it efficiently? That requires new hardware technology and design. And, at the same time, new algorithms as well.”

Copyright © 2020 Audio Bee. All rights reserved.