Posts tagged with "against"
The proposed strategy relies on manipulating with high precision an unimaginably huge number of variables
HN Discussion: https://news.ycombinator.com/item?id=18462374
Posted by niccl (karma: 184)
Post stats: Points: 100 - Comments: 71 - 2018-11-15T19:25:06Z
Illustration: Christian Gralingen
Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.
We’ve been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex manmade systems, and artificial intelligence.” We’ve been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape.” We’ve even been told that “the encryption that protects the world’s most sensitive data may soon be broken” by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.
Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.
It’s become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world’s top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.
In light of all this, it’s natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future.” Having spent decades conducting research in quantum and condensed-matter physics, I’ve developed my very pessimistic view. It’s based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.
The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.
Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy,” he opined. A few years later, Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analog of the universal Turing machine.
The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.
The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it’s fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer’s clock.
The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their “on” and “off” states, according to a prescribed program.
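The bookkeeping described above can be sketched in a few lines of Python (a hypothetical illustration, not any real machine's state model): with N two-state switches, the machine occupies exactly one of 2^N bit strings at any clock tick.

```python
from itertools import product

# Minimal sketch: a classical machine with N on-off switches.
# Its full state space is the set of 2**N bit strings; N = 4 here.
N = 4
states = list(product([0, 1], repeat=N))
assert len(states) == 2 ** N        # 16 possible states

# At any given clock cycle the machine is in exactly one of them:
current = (1, 0, 1, 1)
assert current in states
```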
In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron’s internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or –1/2 (in units of the Planck constant). Whatever axis you choose, you can denote the two basic quantum states of the electron’s spin as ↑ and ↓.
Here’s where things get weird. With the quantum bit, those two states aren’t the only ones possible. That’s because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.
That’s because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent)—nothing else would make sense.
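The normalization rule is easy to check numerically. Here is a minimal sketch of a single-qubit state using the 0.6/0.4 example from the text; the phase factor on β is my own arbitrary addition, included only to show that the amplitudes are genuinely complex.

```python
import cmath
import math

# A single-qubit state: two complex amplitudes whose squared
# magnitudes must sum to 1 (the 0.6 / 0.4 split is from the text;
# the phase on beta is an illustrative choice).
alpha = math.sqrt(0.6)                   # P(up)   = |alpha|^2 = 0.6
beta = math.sqrt(0.4) * cmath.exp(0.3j)  # P(down) = |beta|^2  = 0.4
p_up, p_down = abs(alpha) ** 2, abs(beta) ** 2
assert abs(p_up + p_down - 1.0) < 1e-12  # normalization holds
```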
In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.
Yes, quantum mechanics often defies intuition. But this concept shouldn’t be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it’s not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it’s become almost de rigueur for journalists to describe it as such.
In a system with two qubits, there are 2^2 or 4 basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.
While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.
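The exponential blow-up in the amplitude count is easy to verify directly. This sketch counts the decimal digits of 2^N for the qubit counts discussed in the article:

```python
import math

# Sketch: a state of N qubits is described by 2**N complex amplitudes.
# Counting the decimal digits of 2**N shows how fast that blows up.
for n in (5, 50, 1000):
    digits = int(n * math.log10(2)) + 1
    print(f"N={n}: 2**N is a {digits}-digit number")
# N=5 -> 2 digits (32); N=50 -> 16 digits; N=1000 -> 302 digits,
# i.e. more than 10**300 continuous parameters.
```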
How is information processed in such a machine? That’s done by applying certain kinds of transformations—dubbed “quantum gates”—that change these parameters in a precise and controlled manner.
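The article names no particular gate, but a concrete sketch helps: the Hadamard gate, the standard textbook example of a one-qubit gate, transforms the amplitude pair (α, β) while preserving normalization.

```python
import math

# Sketch of a one-qubit "quantum gate": a transformation acting on
# the amplitude pair (alpha, beta). The Hadamard gate is the standard
# textbook example (not one named in the article itself).
def hadamard(alpha, beta):
    s = 1.0 / math.sqrt(2.0)
    return s * (alpha + beta), s * (alpha - beta)

a, b = hadamard(1.0, 0.0)   # start from the pure "up" state
# The gate produces an equal superposition and preserves normalization:
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-12
```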
Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That’s a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.
To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let’s continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.
In contrast, it’s absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.
How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine—which was already more than astronomical with 1,000 qubits—now becomes even more ludicrous.
Even without considering these impossibly large numbers, it’s sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it’s not like this hasn’t long been a key goal.
In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that “requires on the order of 50 physical qubits” and “exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm….” It’s now the end of 2018, and that ability has still not been demonstrated.
The huge amount of scholarly literature that’s been generated about quantum computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, and they command respect and admiration.
The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by “local” noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.
A decade and a half ago, ARDA’s Experts Panel noted that “it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely.” Here, the key words are “under certain assumptions.” That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.
I argue that they can’t. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.
Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the “on” state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.
Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? Amazingly, not only are there no clear answers to these crucial questions, but they were never even discussed!
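A back-of-the-envelope sketch (my own illustration, using the two truncations of the square root of 2 mentioned above) shows what such imprecision does to a single gate application:

```python
import math

# Sketch: the effect of truncating sqrt(2) in a Hadamard-like gate.
# With the exact value, applying the gate keeps |a|^2 + |b|^2 = 1;
# with a truncated value, the normalization drifts on every application.
for approx in (1.41, 1.41421356237, math.sqrt(2)):
    s = 1.0 / approx
    a, b = s * 1.0, s * 1.0          # gate applied to the "up" state
    drift = abs(a * a + b * b - 1.0)
    print(f"1/{approx}: normalization error = {drift:.2e}")
# 1.41 gives an error of about 6e-3 per gate application.
```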
While various strategies for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).
The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shor’s algorithm, in performing database searches with the similarly famous quantum algorithm that Lov Grover developed at Bell Laboratories in 1996, and in running other specialized applications suitable for quantum computers.
On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I’m skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. No, never.
I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That’s because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What’s more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.
All these problems, as well as a few others I’ve not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.
To my mind, quantum computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: “This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work.”
About the Author
Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.
Take with a grain of salt. I. The basic Darwinist tragedy of software engineering is this: Beautiful code gets rewritten; ugly code survives. Just so, generic code is replaced by its concrete…
Article word count: 351
HN Discussion: https://news.ycombinator.com/item?id=18414379
Posted by octosphere (karma: 3091)
Post stats: Points: 111 - Comments: 66 - 2018-11-09T14:06:51Z
Take with a grain of salt.
The basic Darwinist tragedy of software engineering is this:
Beautiful code gets rewritten; ugly code survives.
Just so, generic code is replaced by its concrete instances, which are faster and (at first) easier to comprehend.
Just so, extensible code gets extended and shimmed and customized until under its own sheer weight it collapses, then replaced by a monolith that Just Works.
Just so, simple code grows, feature by creeping feature, layer by backward-compatible layer, until it is complicated.
So perishes the good, the beautiful, and the true.
In this world of local-optimum-seeking markets, aesthetics alone keep us from the hell of the Programmer-Archaeologist.
Code is limited primarily by our ability to manage complexity. Thus,
Software grows until it exceeds our capacity to understand it.
Because of this, creating large software systems requires making and enforcing decisions about problems beyond any one personʼs ability to understand. Making collective decisions is the core problem of society, government, and culture. After 14,000 years, we still fuck up a lot. As software eats the world, we should expect our collective decision-making systems to be badly stressed for the foreseeable future.
Perhaps we should expect true advances in software “engineering” only when we learn how better to govern ourselves.
To those who have a choice:
Refuse to work on systems that profit from digital addictions. Refuse to work on systems that centralize control of media. Refuse to work on systems that prop up an unjust status quo. Refuse to work on systems that require unsustainable tradeoffs.
Refuse to work on systems that weaponize the fabric of society.
Above all, refuse to work on systems that understand and manipulate people, but offer no affordance for their subjects to understand and manipulate them.
Work on something that matters, if only to you. Work on something that helps people, even in small ways.
Work on making things understandable.
Once, software let us escape to virtual worlds, choose our own communities, and explore alternate realities. These days, for better or worse, software defines everyoneʼs reality. Letʼs build one worth living in.
Far from expanding Chinese soft power, the Belt and Road Initiative appears to be achieving the opposite.
Article word count: 332
HN Discussion: https://news.ycombinator.com/item?id=18343900
Posted by rumcajz (karma: 3884)
Post stats: Points: 64 - Comments: 56 - 2018-10-31T07:30:40Z
China’s Belt and Road Initiative (BRI), an enormous international investment project touted by Chinese President Xi Jinping, was supposed to establish Chinese soft power. Since late 2013, Beijing has poured nearly $700 billion worth of Chinese money into more than sixty countries (according to research by RWR Advisory), much of it in the form of large-scale infrastructure projects and loans to governments that would otherwise struggle to pay for them. The idea was to draw these countries closer to Beijing while boosting Chinese soft power abroad.
Today, however, China faces a backlash to BRI at home and abroad. Many Chinese complain of the initiative’s wasteful spending. Internationally, some of the backlash is geopolitical, as countries grow wary of Beijing’s growing influence. But much of it is simply political. Unlike Western lenders, China does not require its partners to meet stringent conditions related to corruption, human rights, or financial sustainability. This no-strings approach to investment has fueled corruption while allowing governments to burden their countries with unpayable debts. And citizens of many BRI countries have reacted with anger toward China—an anger that is now making itself felt in elections. Far from expanding Chinese soft power, the BRI appears to be achieving the opposite.
THE BACKLASH TO BRI
Malaysia’s election in May 2018 crystallized the sorts of concerns about Chinese power that have been building within BRI client countries. Mahathir Mohamad defeated the incumbent prime minister, Najib Razak, by openly campaigning against Chinese influence. He criticized Razak for approving expensive BRI infrastructure projects that required considerable borrowing from China, which Razak used to create an illusion of development while he and his associates plundered state coffers. Since taking office in May, Mohamad has cancelled two of the largest Chinese projects in Malaysia—a $20 billion railroad and a $2.3 billion natural gas pipeline—citing his country’s inability to pay.
The backlash has not been limited to Malaysia. Pakistan has received an estimated $62 billion in Chinese lending in order to finance projects, including highway and rail infrastructure and
Shifting common knowledge on Saudi Arabia has infected the narrative around SoftBank's Vision Fund, which in turn places unicorn valuations at risk. Read more
Article word count: 1125
HN Discussion: https://news.ycombinator.com/item?id=18271787
Posted by AndrewBissell (karma: 622)
Post stats: Points: 99 - Comments: 84 - 2018-10-22T02:43:09Z
SoftBank CEO Masayoshi Son and Crown Prince MBS in happier times
Can you imagine if Tesla were actually moving forward today with the Saudi sovereign wealth fund in a take-private transaction? Can you imagine the uproar over Elon doing this sort of major deal with the Saudis after the Khashoggi ~~regrettable altercation~~ murder?
Well, no need to imagine. Or at least no need to imagine a unicorn financial transaction caught up in the wake of the Khashoggi events.
SoftBank Group Corp. is in discussions to take a majority stake in WeWork Cos., in what would be a giant bet on the eight-year-old provider of shared office space, according to people familiar with the talks. The investment could total between $15 billion and $20 billion and would likely come from SoftBank’s Vision Fund, some of the people said. The $92 billion Vision Fund, which is backed largely by Saudi Arabia and Abu Dhabi wealth funds as well as by SoftBank, already owns nearly 20% of WeWork after last year committing $4.4 billion in equity funding at a $20 billion valuation. Talks are fluid and there is no guarantee there will be a deal, some of the people said. “SoftBank Explores Taking Majority Stake in WeWork,” Wall Street Journal, October 9, 2018
Softbank’s Vision Fund is the largest single private equity fund in the world, with about $100 billion in capital commitments, of which about half comes from Saudi Arabia. Over the past two years, the Vision Fund has transformed Silicon Valley, particularly in the relationship between capital markets and highly valued private tech companies – the so-called unicorns like Uber and Lyft and Palantir and Airbnb. Who needs an IPO for an exit when you’ve got the Vision Fund to write a multi-billion dollar check?
Case in point: the deal that was shadow-announced earlier this month between the Vision Fund and WeWork, a company that SoftBank valued at $20 billion last year despite, ummm, shall we say … questionable business fundamentals to support that number and a subsequent bond raise. I mean, can anyone say “community-adjusted EBITDA” with a straight face? But hey, that was 12 months ago! What do you say we literally double down on that valuation and buy out all of the external investors in WeWork, so that it’s just the Vision Fund and WeWork management that owns the company? How does that work for you?
OMG. If I’m one of those current private equity investors in WeWork, I am building a shrine in honor of Masayoshi Son, the SoftBank founder and Vision Fund frontman. If I am an investor or an employee of any of these other unicorn tech companies, I am lighting a candle and praying for Masayoshi Son’s continued good health.
The Vision Fund, and more generally the Saudi money behind it, is a classic fin de siecle undertaking. It is The Greatest Fool in a private equity world that must find greater and greater fools for their investment funds to work here at the tail end of a very long and very profitable business cycle. The Vision Fund and its Saudi money isn’t just a lucky break for both the financiers and the entrepreneurs of Silicon Valley. It is an answered prayer.
And here’s the crazy thing … the Khashoggi murder could blow this all up. Not just the WeWork deal. Not just the next mega-fund that SoftBank puts together. But this fund. The Vision Fund.
And if the Vision Fund is no longer viable as a player in Silicon Valley, then I don’t think the unicorn valuations are viable, either.
Why do I think that there is now existential risk for the Vision Fund? Check out these narrative maps before and after news of the Khashoggi murder broke on October 3.
First here’s the narrative map of the 608 unique major-media articles on “SoftBank Vision Fund” for the three months prior to the murder, so July 2 through October 2, 2018. I’ve colored the nodes (each node is a separate article) by sentiment, so green for positive, yellow for neutral, and red for negative.
As you can see, the core of the Vision Fund narrative is all about the deals it is doing. The Saudi connection is way off in the periphery of the overall narrative. Moreover, the sentiment across the map, including the peripheral Saudi thread, is VERY positive. Only 5% of these articles have a negative sentiment, and those are dominated by a very peripheral cluster of articles on microprocessor IP, stemming from SoftBank’s acquisition of ARM in 2016.
But now look at the narrative map since October 3, consisting of 225 unique major-media articles on the Vision Fund.
This is a narrative train wreck. It’s not just that the negative sentiment articles have more than tripled to 18%, and that positive sentiment articles are now less than half of the total (which is AWFUL for the normally rah-rah business press). No, the much more damaging aspect is that Saudi involvement is now at the core of the Vision Fund narrative. There are still more articles being published about the investments that the Vision Fund is making. But that narrative cluster is no longer at the heart of the map. The Vision Fund narrative is now defined by its Saudi funding, and that’s a bell that never gets unrung.
I wrote a brief note last week about how common knowledge regarding the Saudi regime in general and Crown Prince MBS in particular had shifted, about how what everyone knows that everyone knows about MBS had changed. And once common knowledge changes, so does behavior. In many cases, it’s the ONLY thing that can change behaviors.
Well, the common knowledge on SoftBank and the Vision Fund has changed, too. Today, everyone knows that everyone knows that it’s Saudi money behind the fund. And that will absolutely change Silicon Valley’s behavior vis-a-vis the Vision Fund, even if it changes nothing in what Silicon Valley already knew.
Will greed and the answered prayer of The Greatest Fool overcome the narrative stain that associating with the Vision Fund now brings? Maybe. I’d never want to bet against greed! But even more so, I wouldn’t want to bet against the power of narrative.
Bottom line: I think that the MBS-is-a-Bond-villain narrative is now a significant risk to unicorn tech company valuations, through the intermediating narrative of SoftBank’s Vision Fund.
PS – I’d like to give a major h/t to our friends at Landmark Partners for suggesting that we take a look at SoftBank through the lens of the Narrative Machine. Rusty and I are so fortunate to have found fellow truth-seekers throughout the financial services world. Please keep those cards and letters coming (email@example.com) with any ideas on future notes!
President Trump blasts Obama for failing to secure 2016 presidential election against foreign hacking
The era where we were in control of the data on our own computers has been replaced with devices containing sensors we cannot control, storing data we cannot access, in operating systems we cannot…
Article word count: 548
HN Discussion: https://news.ycombinator.com/item?id=18209045
Posted by crunchiebones (karma: 116)
Post stats: Points: 112 - Comments: 43 - 2018-10-13T17:34:54Z
\#HackerNews #against #being #data #invisible #manipulation #our #used #ways
The era when we were in control of the data on our own computers has been replaced by devices containing sensors we cannot control, storing data we cannot access, in operating systems we cannot monitor, in environments where our rights are rendered meaningless. Soon the default will shift from interacting directly with our devices to interacting with devices we do not control, often without even knowing that we are generating data. Below we outline 10 ways in which this exploitation and manipulation is already happening.
1. Fintech and the Financial Exploitation of Customer Data
Financial services are collecting and exploiting increasing amounts of data about our behaviour, interests, networks, and personalities to make financial judgements about us, like our creditworthiness.
2. Profiling and Elections — How Political Campaigns Know Our Deepest Secrets
Political campaigns around the world have turned into sophisticated data operations.
3. Connected Cars and the Future of Car Travel
As society heads toward an ever more connected world, it becomes increasingly difficult for individuals to protect and manage the invisible data that companies and third parties hold about them. This is further complicated by events like data breaches, hacks, and covert information-gathering techniques, which are hard, if not impossible, to consent to.
4. The Myth of Device Control and the Reality of Data Exploitation
Our connected devices carry and communicate vast amounts of personal information, both visible and invisible.
5. Super-Apps and the Exploitative Potential of Mobile Applications
For those concerned by reporting of Facebook’s exploitation of user data to generate sensitive insights into its users, it is worth taking note of WeChat, a Chinese super-app whose success has made it the envy of Western technology giants, including Facebook. WeChat has more than 900 million users. It serves as a portal for nearly every variety of connected activity in China.
6. Smart Cities and Our Brave New World
Cities around the world are deploying systems that collect increasing amounts of data, and the public is not part of deciding if and how such systems are deployed.
7. The Myth of Free Wi-Fi
Many technologies, including those that are critical to our day-to-day lives, do not protect our privacy or security. One reason for this is that the standards which govern our modern internet infrastructure do not prioritise security, which is imperative to protecting privacy.
8. Invisible Discrimination and Poverty
Online, and increasingly offline, companies gather data about us that determine what advertisements we see; this, in turn, affects the opportunities in our lives. The ads we see online, whether we are invited for a job interview, and whether we qualify for benefits are all decided by opaque systems that rely on highly granular data. More often than not, such exploitation of data facilitates and exacerbates already existing inequalities in societies — without us knowing that it occurs. As a result, data exploitation disproportionately affects the poorest and most vulnerable in society.
9. Data and Policing - Your Tweet Can and Will Be Used Against You
Police and security services are increasingly outsourcing intelligence collection to third-party companies which are assigning threat scores and making predictions about who we are.
10. The Gig Economy and Exploitation
Gig economy jobs that depend on mobile applications allow workers’ movements to be monitored, evaluated, and exploited by their employers.
Amazon.com Inc's machine-learning specialists uncovered a big problem: thei...
Article word count: 1131
HN Discussion: https://news.ycombinator.com/item?id=18184697
Posted by wyldfire (karma: 9324)
Post stats: Points: 116 - Comments: 119 - 2018-10-10T13:38:29Z
\#HackerNews #against #amazon #bias #recruiting #scraps #secret #showed #that #tool #women
SAN FRANCISCO (Reuters) - Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.
Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.
“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”
But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.
Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
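The mechanism described above — a model trained on historically male-dominated hiring outcomes learning to penalize female-associated terms — can be illustrated with a toy sketch. This is not Amazon's actual system; the scoring scheme (smoothed log-odds of terms in "hired" vs. "rejected" resumes) and all the resume text below are invented for illustration:

```python
from collections import Counter
from math import log

# Invented historical data: past "hired" resumes skew toward terms used by
# male engineers, mirroring the imbalance described in the article.
hired = [
    "executed backend projects captured market data",
    "executed systems design captured performance wins",
    "led backend team executed migrations",
]
rejected = [
    "women's chess club captain led outreach",
    "women's coding society organizer led workshops",
    "led community outreach organized events",
]

def term_weights(pos_docs, neg_docs, alpha=1.0):
    """Smoothed log-odds of each term appearing in hired vs. rejected text."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    return {
        w: log((pos[w] + alpha) / (n_pos + alpha * len(vocab)))
           - log((neg[w] + alpha) / (n_neg + alpha * len(vocab)))
        for w in vocab
    }

weights = term_weights(hired, rejected)

def score(resume):
    return sum(weights.get(w, 0.0) for w in resume.split())

# "women's" never appears in the historical hired pile, so the model assigns
# it a negative weight -- bias learned purely from the data, with no gender
# field anywhere in the input.
w = weights["women's"]
print(f"learned weight for the term women's: {w:.2f}")  # → -1.10
```

The point of the sketch is that scrubbing the word "women's" from the weights (as Amazon reportedly edited its programs to do) does not remove the bias: any other term correlated with the historical imbalance, such as the names of all-women's colleges, picks it back up.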
The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. Amazon’s recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, they said.
Amazon declined to comment on the recruiting engine or its challenges, but the company says it is committed to workplace diversity and equality.
The company’s experiment, which Reuters is first to report, offers a case study in the limitations of machine learning. It also serves as a lesson to the growing list of large companies including Hilton Worldwide Holdings Inc (HLT.N) and Goldman Sachs Group Inc (GS.N) that are looking to automate portions of the hiring process.
Some 55 percent of U.S. human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years, according to a 2017 survey by talent software firm CareerBuilder.
Employers have long dreamed of harnessing technology to widen the hiring net and reduce reliance on subjective opinions of human recruiters. But computer scientists such as Nihar Shah, who teaches machine learning at Carnegie Mellon University, say there is still much work to do.
“How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable - that’s still quite far off,” he said.
Slideshow (6 Images)
Amazon’s experiment began at a pivotal moment for the world’s largest online retailer. Machine learning was gaining traction in the technology world, thanks to a surge in low-cost computing power. And Amazon’s Human Resources department was about to embark on a hiring spree: Since June 2015, the company’s global headcount has more than tripled to 575,700 workers, regulatory filings show.
So it set up a team in Amazon’s Edinburgh engineering hub that grew to around a dozen people. Their goal was to develop AI that could rapidly crawl the web and spot candidates worth recruiting, the people familiar with the matter said.
The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.
Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured,” one person said.
Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.
THE PROBLEM, OR THE CURE?
Other companies are forging ahead, underscoring the eagerness of employers to harness AI for hiring.
Kevin Parker, chief executive of HireVue, a startup near Salt Lake City, said automation is helping firms look beyond the same recruiting networks upon which they have long relied. His firm analyzes candidates’ speech and facial expressions in video interviews to reduce reliance on resumes.
“You weren’t going back to the same old places; you weren’t going back to just Ivy League schools,” Parker said. His company’s customers include Unilever PLC (ULVR.L) and Hilton.
Goldman Sachs has created its own resume analysis tool that tries to match candidates with the division where they would be the “best fit,” the company said.
Microsoft Corp’s (MSFT.O) LinkedIn, the world’s largest professional network, has gone further. It offers employers algorithmic rankings of candidates based on their fit for job postings on its site.
Still, John Jersin, vice president of LinkedIn Talent Solutions, said the service is not a replacement for traditional recruiters.
“I certainly would not trust any AI system today to make a hiring decision on its own,” he said. “The technology is just not ready yet.”
Some activists say they are concerned about transparency in AI. The American Civil Liberties Union is currently challenging a law that allows criminal prosecution of researchers and journalists who test hiring websites’ algorithms for discrimination.
“We are increasingly focusing on algorithmic fairness as an issue,” said Rachel Goodman, a staff attorney with the Racial Justice Program at the ACLU.
Still, Goodman and other critics of AI acknowledged it could be exceedingly difficult to sue an employer over automated hiring: Job candidates might never know it was being used.
As for Amazon, the company managed to salvage some of what it learned from its failed AI experiment. It now uses a “much-watered down version” of the recruiting engine to help with some rudimentary chores, including culling duplicate candidate profiles from databases, one of the people familiar with the project said.
Another said a new team in Edinburgh has been formed to give automated employment screening another try, this time with a focus on diversity.
Reporting By Jeffrey Dastin in San Francisco; Editing by Jonathan Weber and Marla Dickerson
Our Standards:The Thomson Reuters Trust Principles.
#against #bid #brianrenfroe #chicago #columbusday #efficiency #house #nationalassociationoflettercarriers #oannewsroom #postal #postalservice #presidenttrump #protest #trumpadministration #uspostalservice #usps #white #workers
posted by pod_feeder
#against #blames #booker #brettkavanaugh #cory #corybooker #family #gop #harassment #kelleypaul #oannewsroom #paul #rand #randpaul #sen #senatorcorybooker #senatorrandpaulcategorycategorycdatasupremecourt #threats #wife
posted by pod_feeder
Don’t become your own pharaoh: The Sabbath is the most radical commandment in a time of total work
Article word count: 1274
HN Discussion: https://news.ycombinator.com/item?id=17989868
Posted by BobbyVsTheDevil (karma: 830)
Post stats: Points: 119 - Comments: 120 - 2018-09-14T18:58:04Z
\#HackerNews #act #against #back #bring #lets #sabbath #the #total #work
As a boy in late-1940s Memphis, my dad got a nickel every Friday evening to come by the home of a Russian Jewish immigrant named Harry Levenson and turn on his lights, since the Torah forbids lighting a fire in your home on the Sabbath. My father would wonder, however, if he were somehow sinning. The fourth commandment says that on the Sabbath ‘you shall not do any work – you, your son or your daughter, your male or female slave, your livestock, or the alien resident in your towns’. Was my dad Levenson’s slave? If so, how come he could turn on Levenson’s lights? Were they both going to hell?
‘Remember the Sabbath day, and keep it holy.’ The commandment smacks of obsolete puritanism – the shuttered liquor store, the cheque sitting in a darkened post office. We usually encounter the Sabbath as an inconvenience, or at best a nice idea increasingly at odds with reality. But observing this weekly day of rest can actually be a radical act. Indeed, what makes it so obsolete and impractical is precisely what makes it so dangerous.
When taken seriously, the Sabbath has the power to restructure not only the calendar but also the entire political economy. In place of an economy built upon the profit motive – the ever-present need for more, in fact the need for there to never be enough – the Sabbath puts forward an economy built upon the belief that there is enough. But few who observe the Sabbath are willing to consider its full implications, and therefore few who do not observe it have reason to find any value in it.
The Sabbath’s radicalism should be no surprise given the fact that it originated among a community of former slaves. The 10 commandments constituted a manifesto against the regime that they had recently escaped, and rebellion against that regime was at the heart of their god’s identity, as attested to in the first commandment: ‘I am the Lord your God, who brought you out of the land of Egypt, out of the house of slavery.’ When the ancient Israelites swore to worship only one god, they understood this to mean, in part, they owed no fealty to the pharaoh or any other emperor.
It is therefore instructive to read the fourth commandment in light of the pharaoh’s labour practices described earlier in the book of Exodus. He is depicted as a manager never satisfied with his slaves, especially those building the structures for storing surplus grain. The pharaoh orders that the slaves no longer be given straw with which to make bricks; they must now gather their own straw, while the daily quota for bricks would remain the same. When many fail to meet their quota, the pharaoh has them beaten and calls them lazy.
The fourth commandment presents a god who, rather than demanding ever more work, insists on rest. The weekly Sabbath placed a hard limit on how much work could be done and suggested that this was perfectly all right; enough work was done in the other six days. And whereas the pharaoh relaxed while his people toiled, Yahweh insisted that the people rest as Yahweh rested: ‘For in six days the Lord made heaven and earth, the sea, and all that is in them, but rested the seventh day; therefore the Lord blessed the Sabbath day and consecrated it.’
The Sabbath, as described in Exodus and other passages in the Torah, had a democratising effect. Yahweh’s example – not forcing others to labour while Yahweh rested – was one anybody in power was to imitate. It was not enough for you to rest; your children, slaves, livestock and even the ‘aliens’ in your towns were to rest as well. The Sabbath wasn’t just a time for personal reflection and rejuvenation. It wasn’t self-care. It was for everyone.
There was a reason the fourth commandment came where it did, bridging the commandments on how humans should relate to God with the commandments on how humans should relate to one another. As the Old Testament scholar Walter Brueggemann points out in his book Sabbath as Resistance (2014), a pharaonic economy driven by anxiety begets violence, dishonesty, jealousy, theft, the commodification of sex and familial alienation. None of these had a place in the Torahic economy, which was driven not by anxiety but by wholeness, enoughness. In such a society, there was no need to murder, covet, lie, commit adultery or dishonour one’s parents.
The Sabbath’s centrality to the Torahic economy was made clearer in other laws building upon the fourth commandment. Every seventh year, the Israelites were to let their fields ‘rest and lie fallow, so that the poor of your people may eat; and what they leave the wild animals may eat’. And every 50th year, they were to not only let their fields lie fallow, but forgive all debts; all slaves were to be freed and returned to their families, and all land returned to its original inhabitants. This was a far cry from the pharaonic regime where surplus grain was hoarded and parsed out to the poor only in exchange for work and loyalty. There were no strings attached; the goal wasn’t accumulating power but reconciling the community.
It is unknown if these radical commandments were ever followed to the letter. In any case, they are certainly not now. The Sabbath was desacralised into the weekend, and this desacralisation paved the way for the disappearance of the weekend altogether. The decline of good full-time work and the rise of the gig economy mean that we must relentlessly hustle and never rest. Why haven’t you answered that email? Couldn’t you be doing something more productive with your time? Bring your phone with you to the bathroom so you can at least keep busy.
We are expected to compete with each other for our own labour, so that we each become our own taskmaster, our own pharaoh. Offer your employer more and more work for the same amount of pay, so that you undercut your competition – more and more bricks, and you’ll even bring your own straw.
In our neo-pharaonic economy, we are worth no more than the labour we can perform, and the value of our labour is being ever devalued. We can never work enough. A profit-driven capitalist society depends on the anxious striving for more, and it would break down if there were ever enough.
The Sabbath has no place in such a society and indeed upends its most basic tenets. In a Sabbatarian economy, the right to rest – the right to do nothing of value to capital – is as holy as the right to work. We can give freely to the poor and open our homes to refugees without being worried that there will be nothing left for us. We can erase all debts from our records, because it is necessary for the community to be whole.
It is time for us, whatever our religious beliefs, to see the Sabbatarian laws of old not as backward and pharisaical, but rather as the liberatory statements they were meant to be. It is time to ask what our society would look like if it made room for a new Sabbath – or, to put it a different way, what our society would need to look like for the Sabbath to be possible.
This Idea was made possible through the support of a grant from the Templeton Religion Trust to Aeon. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Templeton Religion Trust.
Funders to Aeon Magazine are not involved in editorial decision-making, including commissioning or content-approval.
Brave, a privacy-focused web browser set up by Silicon Valley engineering guru Brendan Eich, filed privacy complaints in Britain and Ireland that could become a test case against search company Google…
Article word count: 633
HN Discussion: https://news.ycombinator.com/item?id=17970567
Posted by petethomas (karma: 23247)
Post stats: Points: 338 - Comments: 175 - 2018-09-12T16:41:53Z
\#HackerNews #adtech #against #brave #complaint #files #google
COLOGNE, Germany (Reuters) - Brave, a privacy-focused web browser set up by Silicon Valley engineering guru Brendan Eich, filed privacy complaints in Britain and Ireland that could become a test case against search company Google and other digital advertising firms.
The petitioners say they want to trigger an article in the new European General Data Protection Regulation (GDPR) requiring an EU-wide investigation, making it a test case for a new European Data Protection Board created to give the privacy regime more teeth.
The GDPR seeks to ensure that individuals have greater control over the data that companies hold about them. Brave and the co-plaintiffs say Google and others are playing fast and loose with people’s data.
“There is a massive and systematic data breach at the heart of the behavioral advertising industry. Despite the two-year lead-in period before the GDPR, adtech companies have failed to comply,” Brave’s chief policy officer Johnny Ryan told Reuters.
The complaint argues that when a person visits a website, intimate personal data describing them and what they are doing online are broadcast to tens or hundreds of companies without their knowledge in order to auction and place ads.
This, it says in the complaints filed on Wednesday, violates the GDPR’s requirement for personal data to be processed in a way that ensures they are properly secured, including against unauthorized or unlawful processing and against accidental loss.
Google says it has already implemented strong privacy protections in consultation with European regulators and is committed to complying with the GDPR.
Brave operates as a private browser and ad blocker, preventing the use of trackers on web pages to harvest data about people’s online behavior - giving it detailed insight into the inner workings of the online ad industry.
The new data privacy law could also have a huge impact on the small army of tech firms that comes between giants like Google and its users to harvest and crunch data from websites to form very specific consumer profiles.
Were the regulator to find in favor of the plaintiffs, that could undermine the foundations of the data-driven model on which the online ad industry — forecast by research firm eMarketer to grow to $273 billion this year — depends.
The GDPR is the first data privacy regime that foresees heavy fines for serious violations - of up to 4 percent of a company’s global turnover.
The filing, on behalf of Ryan, Jim Killock of the Open Rights Group, a non-profit organization, and academic Michael Veale of University College London, coincided with the opening of a major digital marketing fair in Cologne, Germany.
A copy of the complaint seen by Reuters argues that Google and the adtech industry commit “wide-scale and systematic breaches of the data protection regime” through the way they place personalized online ads.
This happens through a process called “real-time bidding,” which operates through two main channels. One, called OpenRTB, is used by most players in the industry, while a second, called Authorised Buyers, is run by Google.
The complaint argues that these gather and broadcast more personal data than could be justified for advertising purposes; that this data is then subject to further unauthorized processing; and that it can include sensitive information such as sexuality, ethnicity or political opinions.
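To make the complaint's claim concrete, here is a simplified, illustrative bid request in the style of the public OpenRTB 2.x specification. The field names follow that spec, but the structure is heavily trimmed and every value is invented; real requests carry more fields:

```python
import json

# Illustrative OpenRTB-style bid request (field names from the public
# OpenRTB 2.x spec; all values invented). This payload is broadcast to every
# company participating in the auction, not just the eventual winner.
bid_request = {
    "id": "req-8f3a",                       # unique auction ID
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {
        "domain": "example-health-site.com",
        # the page being read can itself be sensitive information
        "page": "https://example-health-site.com/depression-symptoms",
    },
    "device": {
        "ua": "Mozilla/5.0 (...)",          # browser fingerprinting material
        "ip": "203.0.113.7",                # approximate location via IP
        "geo": {"lat": 52.52, "lon": 13.40},
    },
    "user": {
        "id": "3f1c-persistent-cookie-id",  # lets bidders link auctions over time
        "yob": 1987,
        "gender": "F",
    },
}

print(sorted(bid_request))  # → ['device', 'id', 'imp', 'site', 'user']
print(len(json.dumps(bid_request)), "bytes per auction, per bidder")
```

The persistent user ID is what allows the "further unauthorized processing" the complaint describes: any recipient can accumulate these requests over time into a detailed behavioral profile.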
Ravi Naik, a partner at ITN Solicitors in London who is representing the plaintiffs, said this case addressed a long-standing data-protection concern that “is likely to have far reaching and dramatic consequences, which may change our fundamental relationship with the Internet”.
Reporting by Douglas Busvine; Editing by Georgina Prodhan and David Evans
Our Standards:The Thomson Reuters Trust Principles.