We are surrounded by systems that make ethical decisions: systems approving loans, trading stocks, forwarding news articles, recommending jail sentences, and much more. They act for us or against us, but almost always without our consent or even our knowledge. In recent articles, I’ve suggested the ethics of artificial intelligence itself needs to be automated. But my suggestion ignores the reality that ethics has already been automated: merely claiming to make data-based recommendations without taking anything else into account is an ethical stance. We need to do better, and the only way to do better is to build ethics into those systems. This is a problematic and troubling position, but I don’t see any alternative.
The problem with data ethics is scale. Scale brings a fundamental change to ethics, and not one that we’re used to taking into account. That’s important, but it’s not the point I’m making here. The sheer number of decisions that need to be made means that we can’t expect humans to make those decisions. Every time data moves from one site to another, from one context to another, from one intent to another, there is an action that requires some kind of ethical decision.
Gmail’s handling of spam is a good example of a program that makes ethical decisions responsibly. We’re all used to spam blocking, and we don’t object to it, at least partly because email would be unusable without it. And blocking spam requires making ethical decisions automatically: deciding that a message is spam means deciding what other people can and can’t say, and who they can say it to.
There’s a lot we can learn from spam filtering. It only works at scale; Google and other large email providers can do a good job of spam filtering because they see a huge volume of email. (Whether this centralization of email is a good thing is another question.) When their servers see an incoming message that matches certain patterns across their inbound email, that message is marked as spam and sorted into recipients’ spam folders. Spam detection happens in the background; we don’t see it. And the automated decisions aren’t final: you can check the spam folder and retrieve messages that were filed as spam by mistake, and you can mark misclassified messages as not-spam.
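Gmail’s real pipeline is proprietary and vastly more sophisticated, but the statistical core of classic pattern-based spam filtering can be sketched in a few lines. Everything below is illustrative, a minimal naive Bayes scorer, not anything Google actually runs:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Toy Bayesian spam scorer: learns how often each word appears in
    known spam vs. known good mail, then scores new messages."""

    def __init__(self):
        self.spam_words, self.ham_words = Counter(), Counter()
        self.spam_total, self.ham_total = 0, 0

    def train(self, message, is_spam):
        words = message.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def spam_probability(self, message):
        # Accumulate log-odds, with add-one smoothing so a word the
        # filter has never seen doesn't zero out the whole score.
        log_odds = 0.0
        for word in message.lower().split():
            p_spam = (self.spam_words[word] + 1) / (self.spam_total + 2)
            p_ham = (self.ham_words[word] + 1) / (self.ham_total + 2)
            log_odds += math.log(p_spam / p_ham)
        log_odds = max(min(log_odds, 50.0), -50.0)  # clamp to avoid overflow
        return 1 / (1 + math.exp(-log_odds))

if __name__ == "__main__":
    f = NaiveBayesSpamFilter()
    f.train("win a free prize now", is_spam=True)
    f.train("meeting notes attached for review", is_spam=False)
    print(f.spam_probability("free prize now"))  # ~0.89, well above 0.5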
Credit card fraud detection is another system that makes ethical decisions for us. Most of us have had a credit card transaction rejected and, upon calling the company, found that the card had been cancelled because of a fraudulent transaction. (In my case, a motel room in Oklahoma.) Unfortunately, fraud detection doesn’t work as well as spam detection; years later, when my credit card was repeatedly rejected at a restaurant that I patronized often, the credit card company proved unable to fix the transactions or prevent future rejections. (Other credit cards worked.) I’m glad I didn’t have to pay for someone else’s stay in Oklahoma, but an implementation of ethical principles that can’t be corrected when it makes mistakes is seriously flawed.
So, machines are already making ethical decisions, and often doing so badly. Spam detection is the exception, not the rule. And those decisions have an increasingly powerful effect on our lives. Machines determine what posts we see on Facebook, what videos are recommended to us on YouTube, what products are recommended on Amazon. Why did Google News suddenly start showing me alt-right articles about a conspiracy to deny Cornell University students’ inalienable right to hamburgers? I think I know; I’m a Cornell alum, and Google News “thought” I’d be interested. But I’m just guessing, and I have precious little control over what Google News decides to show me. Does real news exist if Google or Facebook decides to show me burger conspiracies instead? What does “news” even mean if fake conspiracy theories are on the same footing? Likewise, does a product exist if Amazon doesn’t recommend it? Does a song exist if YouTube doesn’t select it for your playlist?
These data flows go both ways. Machines determine who sees our posts, who receives data about our purchases, who finds out what websites we visit. We’re largely unaware of those decisions, except in the most grotesque sense: we read about (some of) them in the news, but we’re still unaware of how they impact our lives.
Don’t misconstrue this as an argument against the flow of data. Data flows, and data becomes more valuable to all of us as a result of those flows. But as Helen Nissenbaum argues in her book Privacy in Context, those flows result in changes in context, and when data changes context, the issues quickly become troublesome. I am fine with medical imagery being sent to a research study where it can be used to train radiologists and the AI systems that assist them. I’m not OK with those same images going to an insurance consortium, where they can become evidence of a “pre-existing condition,” or to a marketing organization that can send me fake diagnoses. I believe fairly deeply in free speech, so I’m not too troubled by the existence of conspiracy theories about Cornell’s dining service; but let those stay in the context of conspiracy theorists. Don’t waste my time or my attention.
I’m also not suggesting that machines make ethical choices in the way humans do: ultimately, humans bear responsibility for the decisions their machines make. Machines only follow instructions, whether those instructions are concrete rules or the arcane computations of a neural network. Humans can’t absolve themselves of responsibility by saying, “The machine did it.” We are the only ethical actors, even when we put tools in place to scale our abilities.
If we’re going to automate ethical decisions, we need to start from some design principles. Spam detection gives us a surprisingly good start. Gmail’s spam detection assists users. It has been designed to happen in the background and not get in the user’s way. That’s a simple but important statement: ethical decisions need to stay out of the user’s way. It’s easy to think that users should be involved with these decisions, but that defeats the point: there are too many decisions, and giving permission each time an email is filed as spam would be much worse than clicking on a cookie notice for every website you visit. But staying out of the user’s way has to be balanced against human responsibility: ambiguous or unclear situations need to be called to the users’ attention. When Gmail can’t decide whether or not a message is spam, it passes it on to the user, possibly with a warning.
A second principle we can draw from spam filtering is that decisions can’t be irrevocable. Emails tagged as spam aren’t deleted for 30 days; at any time during that period, the user can visit the spam folder and say “that’s not spam.” In a conversation, Anna Lauren Hoffmann said it’s less important to make every decision correctly than to have a means of redress by which bad decisions can be corrected. That means of redress must be accessible to everyone, and it needs to be human, even though we know humans are frequently biased and unfair. It must be possible to override machine-made decisions, and moving a message out of the spam folder does exactly that.
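To make these two principles concrete, here is a hedged sketch of how confidence thresholds and revocable decisions might wrap around a classifier like the toy one above; the thresholds, retention window, and method names are all assumptions, not Gmail’s actual design:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)
SPAM_THRESHOLD, WARN_THRESHOLD = 0.95, 0.60   # arbitrary; tune per deployment

class MailRouter:
    """Routes messages by classifier confidence, keeps every automated
    decision reversible, and surfaces ambiguous cases to the user."""

    def __init__(self, spam_filter):
        self.filter = spam_filter
        self.quarantined = {}   # message_id -> (message, timestamp)

    def route(self, message_id, message):
        p = self.filter.spam_probability(message)
        if p >= SPAM_THRESHOLD:    # confident: quarantine quietly, don't delete
            self.quarantined[message_id] = (message, datetime.now())
            return "spam_folder"
        if p >= WARN_THRESHOLD:    # ambiguous: pass it to the user with a warning
            return "inbox_with_warning"
        return "inbox"

    def mark_not_spam(self, message_id):
        """The means of redress: restore the message and correct the model."""
        message, _ = self.quarantined.pop(message_id)
        self.filter.train(message, is_spam=False)
        return message             # caller returns it to the inbox

    def purge_expired(self):
        """Only messages quarantined more than 30 days ago are deleted."""
        cutoff = datetime.now() - RETENTION
        self.quarantined = {m: (msg, ts) for m, (msg, ts)
                            in self.quarantined.items() if ts > cutoff}
```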
When the model for spam detection is systematically wrong, users can correct it. It’s easy to mark a message as “spam” or “not spam.” This kind of correction might not be appropriate for more complex applications. For example, we wouldn’t want real estate agents “correcting” a model to recommend houses based on race or religion; and we could even discuss whether similar behavior would be appropriate for spam detection. Designing effective means of redress and correction may be difficult, and we’ve only dealt with the simplest cases.
Ethical problems arise when a company’s interest in profit comes before the interests of the users. We see this all the time: in recommendations designed to maximize ad revenue via “engagement”; in recommendations that steer customers to Amazon’s own products, rather than other products on their platform. The customer’s interest must always come before the company’s. That applies to recommendations in a news feed or on a shopping site, but also to how the customer’s data is used and where it’s shipped. Facebook believes deeply that “bringing the world closer together” is a social good but, as Mary Gray said on Twitter, when we say that something is a “social good,” we need to ask: “good for whom?” Good for advertisers? Stockholders? Or for the people who are being brought together? The answers aren’t all the same, and depend deeply on who’s connected and how.
Many discussions of ethical problems revolve around privacy. But privacy is only the starting point. Again, Nissenbaum clarifies that the real issue isn’t whether data should be private; it’s what happens when data changes context. No privacy tool could have protected the pregnant Target customer who was outed to her parents. The problem wasn’t with privacy technology, but with the intention: to use purchase data to target advertising circulars. How can we control data flows so those flows benefit, rather than harm, the user? “Datasheets for datasets” is a proposal for a standard way to describe data sets; “model cards” is a similar proposal for describing models. While neither of these is a complete solution, I can imagine a future version of these proposals that standardizes metadata so data routing protocols can determine which flows are appropriate and which aren’t. It’s conceivable that the metadata for data could describe what kinds of uses are allowable (extending the concept of informed consent), and metadata for models could describe how data might be used. That’s work that hasn’t been started, but it’s work that’s needed.
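As an illustration of what that might look like, here is one hypothetical shape for such metadata; the field names and the may_flow check are invented, since no such standard exists yet:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    """Hypothetical machine-readable 'datasheet' fragment: records the
    context the data was collected in and the uses it may flow to."""
    name: str
    collection_context: str
    allowed_uses: set = field(default_factory=set)

def may_flow(dataset: DatasetMetadata, requested_use: str) -> bool:
    """A routing check: permit a transfer only if the requested use was
    part of the consent the data was collected under."""
    return requested_use in dataset.allowed_uses

scans = DatasetMetadata(
    name="chest-xray-studies",
    collection_context="clinical care",
    allowed_uses={"radiology-training", "diagnostic-model-research"},
)

assert may_flow(scans, "diagnostic-model-research")    # research: allowed
assert not may_flow(scans, "insurance-underwriting")   # new context: refused
```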
Whatever solutions we end up with, we must not fall in love with the tools. It’s entirely too easy for technologists to build some tools and think they’ve solved a problem, only to realize the tools have created their own problems. Differential privacy can safeguard personal data by adding carefully calibrated random noise to query results, so that aggregate statistics stay accurate while no individual’s record can be identified; but it can also probably protect criminals by hiding evidence. Homomorphic encryption, which allows systems to do computations on encrypted data without first decrypting it, can probably be used to hide the real significance of computations. Thirty years of experience on the internet has taught us that routing protocols can be abused in many ways; protocols that use metadata to route data safely can no doubt be attacked. It’s possible to abuse or to game any solution. That doesn’t mean we shouldn’t build solutions, but we need to build them knowing they aren’t bulletproof, that they’re subject to attack, and that we are ultimately responsible for their behavior.
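To make the first of those tools less abstract, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy; real deployments add far more machinery, and the epsilon value here is arbitrary:

```python
import math
import random

def dp_count(true_count, epsilon=0.1):
    """Laplace mechanism for a counting query: calibrated noise hides any
    one person's presence while large aggregates stay roughly accurate."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    b = 1.0 / epsilon                  # noise scale; a count has sensitivity 1
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# No single answer reveals whether any particular individual is in the
# data, but repeated queries average out near the true value.
print(dp_count(10_000))                # e.g. 10007.3
```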
Our lives are integrated with data in ways our parents could never have predicted. Data transfers have gone way beyond faxing a medical record or two to an insurance company, or authorizing a credit card purchase over an analog phone line. But as Thomas Wolfe wrote, we can’t go home again. There’s no way back to some simpler world where your medical records were stored on paper in your doctor’s office, your purchases were made with cash, and your smartphone didn’t exist. And we wouldn’t want to go back. The benefits of the new data-rich world are immense. Yet, we live in a “data smog” that contains everyone’s purchases, everyone’s medical records, everyone’s location, and even everyone’s heart rate and blood pressure.
It’s time to start building the systems that will truly assist us to manage our data. These machines will need to make ethical decisions, and we will be responsible for those decisions. We can’t avoid that responsibility; we must take it up, difficult and problematic as it is.
- Email Newsletters: The New Social Media (NYT) — “With newsletters, we can rebuild all of the direct connections to people we lost when the social web came along.”
- Scientists Rise Up Against Statistical Significance (Nature) — want to replace p-values with confidence intervals, which are easier to interpret without special training. Sample intro to p-values and confidence intervals.
- Cutter — A Qt and C++ GUI for the radare2 reverse-engineering framework. Its goal is to be an advanced, customizable, and FOSS reverse-engineering platform built with the user experience in mind. Cutter is created by reverse engineers for reverse engineers.
- Computer Latency at a Human Scale — if a CPU cycle is 1 second, then SSD I/O takes 1.5-4 days, and rotational disk I/O takes 1-9 months. Also in the Hacker News thread, human-scale storage: if a byte is a letter, then a 4KB page of memory is 1 sheet of paper, a 256KB L2 cache is a 64-page binder on the desk, and a 1TB SSD is a warehouse of books.
The new COA will increase the Chula Vista Police Department’s coverage for drone operations from a roughly three-square-mile area to nearly 40 square miles
By PoliceOne Staff
REDWOOD CITY, Calif. — A California police department is the first public safety organization to be granted FAA authorization for Beyond Visual Line of Sight (BVLOS) operations, significantly increasing the area the agency’s drones can cover.
Cape, a drone telepresence and data management firm, announced Tuesday that the Federal Aviation Administration has granted the first-ever Certificate of Authorization (COA) with a provision for BVLOS for a public safety organization.
Cape says it worked closely with the Chula Vista Police Department and the FAA to finalize the COA, which has the potential to open doors across the industry.
The department’s drones have conducted more than 300 flights and have contributed to more than 40 arrests.
The new COA, which went into effect on March 15, will increase the total footprint of coverage for emergency response operations from roughly three square miles to nearly 40 square miles. Under the previous regulation, the police department wasn’t allowed to fly its drones beyond the Pilot-in-Command’s (PIC) line of sight. Now, it will be able to operate drones up to three nautical miles from the PIC, more than 10x the previous coverage area.
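Those figures are consistent with circular coverage areas. A quick sanity check, assuming (the article gives only the area) that the old visual-line-of-sight limit corresponds to a radius of roughly one statute mile:

```python
import math

NM_TO_MILES = 1.15078                      # one nautical mile in statute miles

old_radius_mi = 1.0                        # assumed visual-line-of-sight radius
new_radius_mi = 3 * NM_TO_MILES            # three nautical miles under the COA

old_area = math.pi * old_radius_mi ** 2    # ~3.1 square miles
new_area = math.pi * new_radius_mi ** 2    # ~37.4 square miles, "nearly 40"

print(f"old: {old_area:.1f} sq mi, new: {new_area:.1f} sq mi, "
      f"ratio: {new_area / old_area:.0f}x")   # ~12x, i.e. "more than 10x"
```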
“Our team has worked diligently with the FAA to gain approval for this BVLOS provision which is a huge regulatory win and lays future groundwork for the safe expansion of commercial drone integration across industries in the U.S.,” said Chris Rittler, CEO of Cape. “This new COA will help unlock the full potential of the Drone as a First Responder model and is a big step forward for any agency looking to leverage drones to improve the safety of their officers and their community.”
When thinking about the world of autonomous drones and their potential (and inevitable) impact, the focus tends to be on the physical world, with everything from burritos to medicine being delivered swiftly via drones. Those are certainly interesting use cases to highlight, but in reality, what drones primarily transport today is data. How effectively that data is captured and how efficiently that data is routed is what is going to determine the impact of drones for the enterprise.
To date, drones have been amazing at collecting data, with the ability to capture millions of pixels and data points on just a single 20-minute flight. However, the utility of that data often won’t be realized for hours, days or even weeks as far less elegant workflows tend to take over once a drone lands. Fumbling with SD cards and the manual process of uploading, downloading, pointing and clicking are the norm as companies work to extract the value of that data. What we’re seeing on the front lines is a change in priorities around data workflows, where companies across industries are investing in integrations to get data moving more seamlessly out of drones and into their systems.
In simple terms, an API (application programming interface) is a way for different programs to talk to one another. Rules can be written so that data is shared automatically when specified triggers and events occur. Not only does an API integration save valuable hours of manual work, but it also helps increase the speed and accuracy with which you run your drone program.
For example, whereas customers of ours used to manually input data from their internal systems into Kittyhawk, that now happens at the press of a button. Furthermore, drone programs that used to rely on their operators to assess airspace and, if need be, request LAANC authorizations for a mission, can now have those checks programmed from headquarters.
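As a sketch of what such an integration might look like, here is hypothetical client code; the endpoints, parameters, and field names are invented for illustration and are not Kittyhawk’s actual API:

```python
import requests

API_BASE = "https://api.example-droneops.com/v1"   # hypothetical endpoints
API_KEY = "..."                                    # set from your credentials

def upload_flight_log(flight_id, log_path):
    """Push a flight log the moment the aircraft lands, instead of
    hand-carrying SD cards and clicking through upload forms."""
    with open(log_path, "rb") as log:
        resp = requests.post(
            f"{API_BASE}/flights/{flight_id}/logs",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"log": log},
        )
    resp.raise_for_status()
    return resp.json()

def preflight_airspace_check(lat, lon, altitude_ft):
    """Ask the provider whether this mission needs a LAANC authorization,
    so the check runs from headquarters rather than in the field."""
    resp = requests.get(
        f"{API_BASE}/airspace/check",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"lat": lat, "lon": lon, "altitude_ft": altitude_ft},
    )
    resp.raise_for_status()
    return resp.json()["authorization_required"]
```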
Ultimately, what programmatic drone operations enable is for the aircraft to make decisions on their own with minimal, if any, human intervention. As evidenced by large-scale deployments in areas such as agriculture, DroneDeploy continues to reduce the number of manual steps required to get data off the drone and into the cloud for analysis. Skydio has even reduced the amount of human interaction needed to fly complex and rigorous flight paths, using serious onboard computation that it recently extended to its developer partners.
There are a number of stakeholders in the drone industry who can utilize and contribute to the use of programmatic data. In the minds of cellular companies, a drone is yet another device that can have a data plan and connect to AT&T or Verizon towers for immediate data transmission. Of course, drones can also be the source of said data. There is no doubt that 5G opens a lot of opportunities, but companies like Cisco will likely look at fog computing applications to reduce latency and increase the effectiveness of all the data that’s spewing off of drones.
Even the FAA has adopted APIs to automate authorizations for access to controlled airspace near airports. New methods, even for regulators, need to be adopted to handle the scale of drones and their forthcoming ubiquity. Regulators will also play a role in not-too-distant scenarios where flight paths, remote ID and deconfliction will need to happen programmatically, drone to drone.
As exciting as API integrations and new technology can be for each company individually, the true power of programmatic drone operations extends beyond each individual flight to every drone flight occurring in the national airspace. The combination of Networked Remote ID solutions with UTM (unmanned traffic management) means that companies will have the ability to fly more. Flights that may seem risky or exceptional today will soon be the norm, and it’s technology like APIs and networked solutions that will help make it possible.
If you’ve been worrying that drones would be filling the skies over your head, dropping packages off day after day at your neighbor’s house, leaving food on doorsteps or photographing your every move, you can relax a little. At least for now.
The hype over commercial drones is, so far, largely just that. One of the people who contributed to that hype was Jeff Bezos, the Amazon founder. In a “60 Minutes” interview in December 2013, he predicted that deliveries by drones could become commonplace within five years.
The fifth anniversary of Mr. Bezos’s prediction has come and gone, but widespread deliveries by drone are not yet a reality, neither by Amazon nor by any other company.
Regulatory thickets, technical complexity and the public’s skittishness have proven to be formidable hurdles. At a minimum, the unresolved issues include whether it is safe to allow drones to fly beyond a pilot’s visual line of sight, to operate at night and to fly over people.
But that doesn’t mean there’s likely to be a drone-free future. And maybe there shouldn’t be.
Test programs around the world that use the technology for lifesaving pharmaceuticals as well as for food and even coffee are attempting to prove that delivery by drones is not only safe, but efficient and environmentally sound.
Several companies, including California-based Zipline, which is distributing blood by drone in Rwanda, and Swoop Aero, an Australian company that is dispensing vaccines and other medication on Vanuatu, a nation of volcanic islands in the Pacific, are focused on medical needs.
Others are turning their sights on consumers, hoping drones can be part of the answer to helping small businesses compete with behemoth retailers — or even helping the big guys keep their competitive edge.
Ultimately, says the analyst Colin Snow, whether for sunscreen or sushi, the “big question is whether it makes economic sense to do ‘last mile’ delivery by drone. Some studies say yes, while others say no.”
Chinese aviation administrators, for example, have already approved drone deliveries by the e-commerce giant JD.com and delivery giant SF Holding Co. But in the United States, it will depend on whether regulators eventually allow drone companies to have autonomous systems in which multiple aircraft are overseen by one pilot and whether they can fly beyond the vision of that pilot. Current regulations do not permit multiple drones per operator without a waiver. Operators like Wing, the drone-delivery company owned by Google parent Alphabet, have that capability.
But the immediate economic return isn’t clear yet. According to the chief executive of Wing, James Burgess, “scale doesn’t concern us right now. We strongly believe that eventually we will be able to develop a delivery service for communities that will enable them to transport items in just a few minutes at low cost.”
The company, whose drones can now travel round trip up to 20 kilometers — just over 12 miles — is participating in various stages of testing on three different continents. Its first pilot program is in a suburb of Canberra, Australia, where it is working with local merchants to deliver small packages, including over-the-counter medicine, as well as food. The Australian regulators have issued a permit to allow one pilot to operate up to 20 drones at a time with virtual oversight.
“We’ve tried to keep expectations to a minimum and stayed humble. We didn’t have a lot of preconceived notions,” Mr. Burgess said. The Wing drone is a hybrid that includes, yes, wings for horizontal flying, as well as miniaturized propellers — like a helicopter’s — that allow for hovering over a destination. Somewhat surprisingly, the most popular item ordered in the Australia pilot is coffee, which can be received — still hot — in as little as three minutes from the time the order is placed.
This spring, the company will begin a new trial in Helsinki, for which it is soliciting views as to what should be delivered.
Mr. Burgess also said that, separate from drone tests, the company and others were working on a so-called unmanned traffic management system. Akin to virtual air traffic controllers, the system will be designed to permit multiple aircraft — manned and unmanned — to fly safely in the airspace simultaneously. Wing is also one of several companies participating in a pilot program in Virginia. As with its testing in Finland and Australia, Wing will focus on the delivery of consumer goods, including food.
The Virginia site, in Blacksburg, near Virginia Tech, is one of 10 chosen by the Federal Aviation Administration as part of its Unmanned Aircraft Systems Integration Pilot Program.
The 10 were culled from 149 applications from “state, local and tribal governments,” agency spokesman Les Dorr said in an email. Those in the industry didn’t apply directly, but could show their interest, he said, and more than 2,800 companies responded.
Wing and Uber are two of the companies participating. But Amazon’s Prime Air division is not among those testing its technology. In a statement issued when the 10 locales were announced last May, the company said, “While it’s unfortunate the applications we were involved with were not selected, we support the Administration’s efforts to create a pilot program aimed at keeping America at the forefront of aviation and drone innovation.”
Amazon’s Prime Air is, however, part of a consortium of companies participating in the European Union’s test of drone deliveries in Belgium.
A number of smaller drone companies are involved in testing programs elsewhere. North Carolina has partnered with Silicon Valley-based Matternet and Zipline to deliver essential medical supplies and laboratory samples. In addition, Israeli start-up Flytrex, which is already delivering goods by drone in Reykjavik in partnership with online Icelandic retailer AHA, will focus on food in Holly Springs, N.C., a fast-growing suburb of Raleigh.
Those in the industry, not surprisingly, say that the response from residents has been positive. A Pew Research Center survey in December 2017, however, found that 54 percent of Americans disapprove of drones flying near homes; 11 percent support drones, while 34 percent favor limits on use.
Part of the reluctance, some say, is concern about privacy and sound. As a result, local governments are trying to educate their residents about drone operation. Noise levels are comparable to dishwashers and cars driving nearby, according to a report by Flytrex.
Privacy concerns are in part alleviated by ensuring that drones do not have forward-facing cameras capable of photographing those on the ground.
While the F.A.A. has chosen the 10 pilots, the programs still need to apply for agency waivers because they will fly beyond the visual line of sight, fly at night and fly over people, operations not allowed under current law. The agency is seeking comments on expanding permissible uses under current law; it is also testing to evaluate the parameters of regulation.
As a practical matter, this means that some of the pilot programs are not yet operational as they await F.A.A. approval.
That’s O.K., said James Pearce, a spokesman for the North Carolina Department of Transportation, which prefers to ensure that the drones can safely fly and that those on the ground are not exposed to any risks, including those that are self-inflicted. “We need to make sure that people know not to try to grab the drones.”
The F.A.A. is making quarterly visits, said Aaron Levitt, the assistant director of engineering for Holly Springs, N.C., and a drone enthusiast. He recently spent several days on a site visit with agency representatives as they prepared for the first phase, which will permit 15 restaurants to send orders to a local athletic complex, and planned for a later phase when the drones will fly beyond the line of sight.
While the deliberate pace may seem slow, Mr. Levitt, like others interviewed, remains sanguine. “It’s like the red flag laws when cars began to populate the roads. You had to have someone walking ahead with a flag to warn others. That’s where we are today with drones — not being able to fly beyond the visual line of sight is like not allowing a car to drive faster than a person can walk.”
While the companies, F.A.A. and local governments test the capabilities and limits, there’s another factor that comes into play. Unlike traditional car or truck deliveries, battery-operated drones don’t rely on fossil fuels for their short flights. A 2018 study in the journal Nature found that electric drones were “far more efficient than trucks, vans, larger gasoline drones, and passenger cars,” when comparing for distance traveled. And though the study found that benefits may be reduced once the electricity used for recharging and warehousing was factored in, drones clearly have less environmental impact than a one-item delivery by car.
The environmental benefits are real, Mr. Burgess said.
Or, as Yariv Bash, the chief executive of Flytrex said: “Now, you’ve got a guy driving a one-ton car bringing a half-pound hamburger. It’s crazy.”