The Yottamaster USB-C drive enclosure is a tool-less housing for 2.5″ SSDs and hard drives that lets you bring your ultra-high-speed SSD along, or use it as storage for your gaming system.
There are no tools required to open and close the housing, and nothing to screw the drive into; they do, however, include a couple of foam drive coozies to make sure the drive doesn't jiggle around in there.
The upper limits are listed at 5 Gbps and 2TB respectively, which is in line with products I can find on Amazon within 1.5x the price range. It doesn't boast the 10 Gbps bandwidth you find in the $21 range, but it's coming in at $9.99, so you get what you're paying for.
I'd pay more attention to the 2TB drive limit and less to the 5 Gbps rating, as SSDs are about to explode in size and drop in price. That said, I don't think the majority of people will notice much of a difference between 5 Gbps and 10 Gbps when most USB 3 ports run at 5 Gbps and most consumer SSDs top out at 6 Gbps SATA.
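Since spec sheets like this one mix up gigabits per second and megabytes per second constantly, here is a rough conversion sketch. The encoding-efficiency figures are the nominal ones for each interface (8b/10b for USB 3 Gen 1 and SATA III, 128b/132b for USB 3 Gen 2); real-world per-device overhead varies, so treat these as ceilings, not promises:

```python
# Back-of-the-envelope conversion from interface line rates (Gbps)
# to usable throughput (MB/s). These are theoretical maxima; actual
# drives and bridges will come in somewhat lower.

def line_rate_to_mb_per_s(gbps: float, encoding_efficiency: float = 0.8) -> float:
    """Convert a Gbps line rate to usable MB/s.

    USB 3.0/3.1 Gen 1 (5 Gbps) and SATA III (6 Gbps) use 8b/10b
    encoding, so only 8 of every 10 bits on the wire carry data --
    hence the default 0.8 efficiency factor.
    """
    bits_per_byte = 8
    return gbps * 1000 / bits_per_byte * encoding_efficiency

usb3_gen1 = line_rate_to_mb_per_s(5)                             # 500 MB/s
usb3_gen2 = line_rate_to_mb_per_s(10, encoding_efficiency=0.97)  # ~1212 MB/s
sata3 = line_rate_to_mb_per_s(6)                                 # 600 MB/s

print(f"USB 3 Gen 1 (5 Gbps):  {usb3_gen1:.0f} MB/s usable")
print(f"USB 3 Gen 2 (10 Gbps): {usb3_gen2:.0f} MB/s usable")
print(f"SATA III (6 Gbps):     {sata3:.0f} MB/s usable")
```

The punchline: a SATA SSD tops out around 600 MB/s anyway, so a 5 Gbps enclosure (~500 MB/s usable) gives up less headroom than the raw 5-vs-10 numbers suggest.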
So, for $9.99, for use as quick storage for a gaming system, it's perfect. (The PS4 has a USB 3.1/5 Gbps connection; the Xbox One appears to be USB 3.0/5 Gbps.) If you're looking at higher-end video editing or the like, you'll probably feel a little let down by the ~600 MB/s ceiling and the 2TB limit.
Price-wise: great, get it. Performance: perfect for gaming systems of the current era, though the drive-size limitation may get you in a few years.
Powered by WPeMatico
- Serverless Microservice Patterns for AWS (Jeremy Daly) — I’ve read a lot of posts that mention serverless microservices, but they often don’t go into much detail. I feel like that can leave people confused and make it harder for them to implement their own solutions. Since I work with serverless microservices all the time, I figured I’d compile a list of design patterns and how to implement them in AWS. I came up with 19 of them; though, I’m sure there are plenty more.
- Fans are Better Than Tech at Organizing Information Online (Wired) — coverage of Archive Of Our Own (AO3), a fanfic archive which is nominated for a Hugo this year. AO3’s trick is that it involves humans by design—around 350 volunteer tag wranglers in 2019, up from 160 people in 2012—who each spend a few hours a week deciding whether new tags should be treated as synonyms or subsets of existing tags, or simply left alone. AO3’s Tag Wrangling Chairs estimate that the group is on track to wrangle over two million never-before-used tags in 2019, up from around 1.5 million in 2018.
- Mary Meeker’s Internet Trends, 2019 Edition — like April Fool’s Day, it’s a landmark in the industry, but fewer people look forward to it with glee these days. The big trends driving growth (Moore’s Law, mobile device sales, people connected to the internet) have slowed down. Internet ad spend is still rising, customer acquisition costs are going up, etc. Two eye-watering facts: Americans are spending 6.3 hours per day on digital media, up 7% from the year before, and people are increasingly communicating in images: 50% of Twitter impressions are of posts with media, which is startling for a medium that was originally SMS.
- Testing Facebook’s Fake Video Policy (Vice) — a fake video of Mark Zuckerberg was uploaded to test the company’s policy. They’re treating it like the earlier Pelosi video: Instead of deleting the video, the company chose to de-prioritize it, so that it appeared less frequently in users’ feeds, and placed the video alongside third-party fact-checker information.
- Possible Premium Firefox Coming (ZDNet) — an interesting approach for Firefox, but I’d pay for something as good as Chrome that didn’t have the mixed incentives for developers.
- Graph Processing on FPGAs: Taxonomy, Survey, Challenges — Our survey describes and categorizes existing schemes and explains key ideas. Finally, we discuss research and engineering challenges to outline the future of graph computations on FPGAs.
- Decision Disagreement Framework: How We Encourage Disagreements at Matter — we couldn’t find a framework for handling and supporting disagreements after decisions have been made, especially if you weren’t a part of making that decision. We took inspiration from existing frameworks to create the Decision Disagreement Framework.
- Understanding the Online Safety and Privacy Challenges Faced by South Asian Women — This post, after providing a short background, covers the following topics: Device privacy challenges: This section outlines the privacy challenges faced by South Asian women when using their smartphones; Online safety challenges: Highlights the risks and abuse faced by South Asian women when using online services; Design considerations to promote gender equity: When building products, features that mitigate the risks would help to improve the safety of South Asian women. Ethnographic study that’s super useful for systems designers who aren’t South Asian women.
In this post, I share slides and notes from a keynote that Roger Chen and I gave at the 2019 Artificial Intelligence conference in New York City. In this short summary, I highlight results from a recent survey (AI Adoption in the Enterprise) and describe recent trends in AI. Over the past decade, AI and machine learning (ML) have become extremely active research areas: the web site arxiv.org had an average daily upload of around 100 machine learning papers in 2018. With all the research that has been conducted over the past few years, it’s fair to say that we have now entered the implementation phase for many AI technologies. Companies are beginning to translate research results and developments into products and services.
An early indicator of commercial activity and interest is the number of patent filings. I was fortunate enough to contribute to a recent research report from the World Intellectual Property Organization (WIPO) that examined worldwide patent filings in areas pertaining to AI and machine learning. One of their key findings is that the number of patent filings is growing fast: in fact, the ratio of patent filings to scientific publications indicates that patent filings are growing at a faster rate than publications.
Looking more closely into specific areas, the WIPO study found that Computer Vision is mentioned in 49% of all AI-related patents (167,000+). In addition, the number of computer vision patent filings is growing annually by an average of 24%, with more than 21,000 patent applications filed in 2016 alone.
It has been an extremely productive year for researchers in natural language. Every few months, new deep learning models seem to set records on many different natural language tasks and benchmarks.
Much of this research was done in the open, accompanied by open source code and pre-trained models. While applications of AI and machine learning to text are not new, the accuracy of some of these models has drawn interest from practitioners and companies. Some of the most popular trainings, tutorials, and sessions at our AI conferences are ones that focus on text and natural language applications. It’s important to point out that, depending on your application or setting, you will likely need to tune these language models for your specific domain and application.
We continue to see improvements in tools for deep learning. Our surveys show that TensorFlow and PyTorch remain the most popular libraries. There are new open source tools like Ludwig and Analytics Zoo aimed at non-experts who want to begin using deep learning. We are also seeing tools from startups like Weights & Biases and Determined AI (full disclosure: I am an advisor to Determined AI), and open source tools like Nauta, designed specifically for companies with growing teams of deep learning engineers and data scientists. These tools optimize compute resources, automate various stages of model building, and help users keep track of and manage experiments.
In our survey that drew more than 1,300 respondents, 22% signaled they are beginning to use reinforcement learning (RL), a form of ML that has been associated with recent prominent examples of “self-learning” systems. There are a couple of reasons for this. We are beginning to see more accessible tools for RL—open source, proprietary, and SaaS—and more importantly, companies like Netflix are beginning to share use cases for RL. Focusing on tooling for RL, there have been a variety of new tools that have come online over the last year. For example, Danny Lange and his team at Unity have released a suite of tools that enable researchers and developers to “test new AI algorithms quickly and efficiently across a new generation of robotics, games, and beyond.”
Let’s look at another one of these tools more closely. At our AI conferences, we’ve been offering a tutorial on an open source computing framework called Ray, developed by a team at UC Berkeley’s RISE Lab.
As I noted in a previous post, Ray has grown across multiple fronts: number of users, contributors, and use cases. Ray’s support for both stateless and stateful computations, and fine-grained control over scheduling allows users to implement a variety of services and applications on top of it, including RL. The RL library on top of Ray—RLlib—provides both a unified API for different types of RL training, and all of its algorithms are distributed. Thus, both RL users and RL researchers are already benefiting from using RLlib.
There’s also exciting news on the hardware front. Last year we began tracking startups building specialized hardware for deep learning and AI for training and inference as well as for use in edge devices and in data centers. We already have specialized hardware for inference (and even training—TPUs on the Google Cloud Platform). Toward the latter part of this year, in the Q3/Q4 time frame, we expect more companies to begin releasing hardware that will greatly accelerate training and inference while being much more energy efficient. Given that we are in a highly empirical era for machine learning and AI, tools that can greatly accelerate training time while lowering costs will lead to many more experiments and potential breakthroughs.
In our survey, we found more than 60% of companies were planning to invest some of their IT budget into AI. But the level of investment depended on how much experience a company already had with AI technologies. As you can see in Figure 5, those with a mature practice plan to invest a sizable portion of their IT budget into AI. There’s a strong likelihood that the gap between AI leaders and laggards will further widen.
So, what is holding back adoption of AI? According to our survey, the answer depends on the maturity level of a company.
Those who are just getting started struggle with finding use cases or explaining the importance of AI. Also, we are far from general AI: we are at a stage where these technologies have to be tuned and targeted, and many AI systems work by augmenting domain experts. Thus, these technologies require training at all levels of an organization, not just in technical teams. It’s important that managers understand the capabilities and limitations of current AI technologies, and see how other companies are using AI. Take the case of robotic process automation (RPA), a hot topic among enterprises. It’s really the people closest to tasks (a “bottom-up” approach) who can best identify areas where RPA is most suitable.
On the other hand, those with mature AI practices struggle with lack of data and lack of skilled people. Let’s look at the skills gap more closely in Figure 7.
Skills requirements depend on the level of maturity as well. Companies with more mature AI practices have less trouble finding use cases and have less need for data scientists. However, the need for data and infrastructure engineers cuts across companies. It’s important to remember that much of AI today still requires large amounts of training data to train large models that require large amounts of compute resources. I recently wrote about the requisite foundational technologies needed to succeed in machine learning and AI.
As the use of AI technologies grows within companies, we will need better tools for machine learning model development, governance, and operations. We are beginning to see tools that can automate many stages of a machine learning pipeline, help manage the ML model development process, and search through the space of possible neural network architectures. Given the level of excitement around ML and AI, we expect tools in these areas to improve and gain widespread adoption.
With the growing interest in AI among companies, this is a great time to be building tools for ML. When we asked our survey respondents, “Which tools are you planning to incorporate into your ML workflows within the next 12 months?”, we found:
- 48% wanted tools for model visualization
- 43% needed tools for automated model search and hyperparameter tuning
Companies are realizing that ML and AI are much more than optimizing a business or statistical metric. Over the past year, I’ve tried to summarize some of these considerations under the umbrella of “risk management,” a term and practice area many companies are already familiar with. Researchers and companies are beginning to release tools and frameworks to explain various techniques they are using to develop “responsible AI.” When we asked our survey respondents, “What kinds of risks do you check for during ML model building and deployment?”, we found the following:
- 45% assessed model interpretability and explainability
- 41% indicated that they had tests for fairness and bias
- 35% checked for privacy
- 34% looked into safety and reliability issues
- 27% tested for security vulnerabilities
A word about data security. In the age of AI, there are situations where data integrity will be just as critical as data security. That’s because AI systems are highly dependent on data used for training. Building data infrastructure that can keep track of data governance and lineage will be very important, not only for security and quality assurance audits, but also for compliance with existing and future regulations.
We are very much in the implementation phase for machine learning and AI. The past decade has produced a flurry of research results, and we are beginning to see a wide selection of accessible tools aimed at companies and developers. But we are still in the early stages of AI adoption, and much work remains in many areas on the tooling front. With that said, many startups, companies, and researchers are hard at work to improve the ecosystems of tools for ML and AI. Over the next 12 months, I expect to see a lot of progress in tools that can ease ML development, governance, and operations.
Pretty soon, when a resident of Tijuana, Mexico, calls the police, the first responder might not be a man or woman wearing the Department of Public Safety’s midnight-blue uniform.
It might be a drone. And that could cause some people to worry.
Tijuana, a city of 1.3 million people just south of the U.S.-Mexico border, announced in late May that it had hired California tech firm Cape to help it operate two small quadcopter-style drones from the city’s police headquarters.
If the experiences of other cities are any indication, the Tijuana police-drones could chase fleeing suspects and use their cameras to gather evidence, among other law-enforcement duties.
Most importantly, they could get to the source of a call quicker than a patrol car.
Drones are “a fast way to get eyes on an emergency scene,” Barry Friedman, a law professor at New York University, told The Daily Beast.
In that way, they’re like police helicopters, but cheaper and quicker to deploy. A high-end quadcopter costs just a few thousand dollars to buy, and requires just one trained operator.
“We have seen the benefits of drone use for public safety first-hand, and are extremely proud to be at the forefront of adopting the technology,” Marco Antonio Sotomayor Amezcua, Tijuana’s secretary of public safety, said in a statement to the Association of Unmanned Vehicle Systems International, a drone trade group.
Tijuana’s Unmanned Aerial Vehicles (UAVs) are the first in Mexico. But they likely won’t be the last. More and more police departments across North America, and the world, operate their own law-enforcement drones.
In 2018, Chula Vista, the U.S. city of 270,000 just across the border from Tijuana, teamed up with Cape to integrate quadcopters into the local police force. Since the program’s launch in October 2018, according to Cape, Chula Vista’s UAVs have assisted in 72 arrests.
Once Tijuana has its own drones, the two municipalities will form a sort of international supercity of robotic policing. The Tijuana department of public safety declined to comment for this story.
As cop-drones proliferate, so do critics of the technology. Protesters succeeded in blocking early efforts by departments in Seattle and Los Angeles to deploy UAVs for police work.
Critics fear the erosion of citizens’ privacy. “Drones concern many people as a sort of Big Brother ‘eye in the sky,’” Friedman explained.
“Early adopters of this new technology have discovered a painful truth,” experts warned in a 2016 study commissioned by the U.S. Justice Department. “Where law-enforcement leaders see a wonderful new tool for controlling crime and increasing public safety, a portion of the public sees the potential for a massive invasion of privacy.”
Anti-drone activists also worry that police might arm their UAVs and turn communities into war zones. “In the public mind, the type specimen of unmanned aircraft systems is the military drone, able to hover for days, spying indiscriminately and conducting missile strikes without warning,” the DOJ-sponsored study pointed out.
The Tijuana drones are unarmed. Most American police drones are also unarmed, although North Dakota in 2015 passed a law allowing police in the state to equip UAVs with non-lethal weaponry such as pepper spray or guns firing rubber bullets.
In the United States, arming a law-enforcement drone with lethal weaponry could violate the constitutional ban on the use of military force in domestic policing, the DOJ study noted.
It appears that, in practice, police in the United States and Mexico aren’t currently interested in deploying killer drones. Instead, they want UAVs for their cameras, and for their speed and ability to fly over traffic and buildings.
“With responding officers able to view the drone’s livestream en route to the scene, they gain full situational awareness that was previously not possible, allowing teams to better plan their approach and preparing them for any potential danger they face upon arrival,” Chris Rittler, the CEO of Cape, told The Daily Beast.
Drones are expendable. In October 2013 in Arlington, Texas, a man shot and killed someone in an apartment parking lot then barricaded himself inside the building, according to a department report the DOJ study cites.
The shooter had a clear view of the area through a window, so the responding cops sent a drone first. “Our SWAT team was able to collect vital intelligence without going into harm’s way,” the department reported.
Rittler told The Daily Beast his company is modeling Tijuana’s police-drone protocols on the system the company set up for neighboring Chula Vista. The drone operators will work from the police headquarters. The two police-’bots will take off from the building’s roof.
In Chula Vista, the police dispatch drones only on “high-priority calls” for a maximum of 10 hours per day, four days a week, Rittler said. In the first few months of Chula Vista’s drone program, the Federal Aviation Administration required the city’s drones at all times to remain within view of their operators, which limited their range to just a few miles.
In early 2019 the FAA granted the Chula Vista police department a waiver to fly drones beyond their operators’ line of sight, greatly expanding the area over which they can operate. Mexico’s own drone regulations require operators to keep their UAVs in view.
Tijuana public-safety secretary Amezcua said his city’s cop-drones could become the model for robotic policing throughout Mexico. “We look forward to seeing the impact the program has on our city and to serving as an example for other agencies across the country,” he stated.
Friedman has some advice for the city as it begins deploying UAVs as first-responders. “The way to do that,” Friedman said, “is with a sound policy for drone operations that limits where and how the drones are used [and] limits the recording time and retention of data to only what is necessary to deal with the emergency.”