Showing posts with label computationalism. Show all posts

Thursday, August 31, 2023

Project Stargate Held That The Universe Is A Projection Of A Lower Dimensional Reality

nature  |  At the time, reversible computing was widely considered impossible. A conventional digital computer is assembled from an array of logic gates — ANDs, ORs, XORs and so on — in which, generally, two inputs become one output. The input information is erased, producing heat, and the process cannot be reversed. With Margolus and a young Italian electrical engineer, Tommaso Toffoli, Fredkin showed that certain gates with three inputs and three outputs — what became known as Fredkin and Toffoli gates — could be arranged such that all the intermediate steps of any possible computation could be preserved, allowing the process to be reversed on completion. As they set out in a seminal 1982 paper, a computer built with those gates might, theoretically at least, produce no waste heat and thus consume no energy [1].
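
To make the reversibility point concrete, here is a minimal sketch (mine, not from the Nature piece) of the two gates as plain Boolean functions: each maps the eight possible 3-bit inputs onto eight distinct outputs, so nothing is erased and each gate undoes itself when applied twice.

```python
# Toy models of the Toffoli (controlled-controlled-NOT) and Fredkin
# (controlled-swap) gates. Each is a bijection on 3-bit states, so no
# input information is lost and the computation can be run backwards.

def toffoli(a, b, c):
    # flip c only when both control bits a and b are 1
    return a, b, c ^ (a & b)

def fredkin(c, x, y):
    # swap x and y only when the control bit c is 1
    return (c, y, x) if c else (c, x, y)

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# applying either gate twice restores the original input...
assert all(toffoli(*toffoli(*s)) == s for s in states)
assert all(fredkin(*fredkin(*s)) == s for s in states)
# ...and all eight outputs are distinct, i.e. the maps are bijections
assert len({toffoli(*s) for s in states}) == 8
assert len({fredkin(*s) for s in states}) == 8
```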

This seemed initially no more than a curiosity. Fredkin felt that the concept might help in the development of more efficient computers with less wasted heat, but there was no practical way to realize the idea fully using classical computers. In 1981, however, history took a new turn, when Fredkin and Toffoli organized the Physics of Computation Symposium at MIT. Feynman was among the luminaries present. In a now famous contribution, he suggested that, rather than trying to simulate quantum phenomena with conventional digital computers, some physical systems that exhibit quantum behaviour might be better tools.

This talk is widely seen as ushering in the age of quantum computers, which harness the full power of quantum mechanics to solve certain problems — such as the quantum-simulation problem that Feynman was addressing — much faster than any classical computer can. Four decades on, small quantum computers are now in development. The electronics, lasers and cooling systems needed to make them work consume a lot of power, but the quantum logical operations themselves are pretty much lossless.

Digital physics

Reversible computation “was an essential precondition really, for being able to conceive of quantum computers”, says Seth Lloyd, a mechanical engineer at MIT who in 1993 developed what is considered the first realizable concept for a quantum computer [2]. Although the IBM physicist Charles Bennett had also produced models of reversible computation, Lloyd adds, it was the zero-dissipation versions described by Fredkin, Toffoli and Margolus that ended up becoming the models on which quantum computation was built.

For the cosmos to have been produced by a system of data bits at the tiny Planck scale — a scale at which present theories of physics are expected to break down — space and time must be made up of discrete, quantized entities. The effect of such a granular space-time might show up in tiny differences, for example, in how long it takes light of various frequencies to propagate across billions of light years. Really pinning down the idea, however, would probably require a quantum theory of gravity that establishes the relationship between the effects of Einstein’s general theory of relativity at the macro scale and quantum effects on the micro scale. This has so far eluded theorists. Here, the digital universe might just help itself out. Favoured routes towards quantum theories of gravitation are gradually starting to look more computational in nature, says Lloyd — for example the holographic principle introduced by ‘t Hooft, which holds that our world is a projection of a lower-dimensional reality. “It seems hopeful that these quantum digital universe ideas might be able to shed some light on some of these mysteries,” says Lloyd.

That would be just the latest twist in an unconventional story. Fredkin himself thought that his lack of a typical education in physics was, in part, what enabled him to arrive at his distinctive views on the subject. Lloyd tends to agree. “I think if he had had a more conventional education, if he’d come up through the ranks and had taken the standard physics courses and so on, maybe he would have done less interesting work.”

 

The Cellular Automaton Interpretation of Quantum Mechanics

springer  |  This book presents the deterministic view of quantum mechanics developed by Nobel Laureate Gerard 't Hooft.

Dissatisfied with the uncomfortable gaps in the way conventional quantum mechanics meshes with the classical world, 't Hooft has revived the old hidden variable ideas, but now in a much more systematic way than usual. In this, quantum mechanics is viewed as a tool rather than a theory.

The author gives examples of models that are classical in essence, but can be analysed by the use of quantum techniques, and argues that even the Standard Model, together with gravitational interactions, might be viewed as a quantum mechanical approach to analysing a system that could be classical at its core. He shows how this approach, even though it is based on hidden variables, can be plausibly reconciled with Bell's theorem, and how the usual objections voiced against the idea of ‘superdeterminism' can be overcome, at least in principle.

This framework elegantly explains - and automatically cures - the problems of the wave function collapse and the measurement problem. Even the existence of an “arrow of time" can perhaps be explained in a more elegant way than usual. As well as reviewing the author’s earlier work in the field, the book also contains many new observations and calculations. It provides stimulating reading for all physicists working on the foundations of quantum theory.

Monday, June 05, 2023

Try Fitting Assembly/Constructor Theory Over Twistor Space

quantamagazine  |  Assembly theory started when Cronin asked why, given the astronomical number of ways to combine different atoms, nature makes some molecules and not others. It’s one thing to say that an object is possible according to the laws of physics; it’s another to say there’s an actual pathway for making it from its component parts. “Assembly theory was developed to capture my intuition that complex molecules can’t just emerge into existence because the combinatorial space is too vast,” Cronin said.

“We live in a recursively structured universe,” Walker said. “Most structure has to be built on memory of the past. The information is built up over time.”

Assembly theory makes the seemingly uncontroversial assumption that complex objects arise from combining many simpler objects. The theory says it’s possible to objectively measure an object’s complexity by considering how it got made. That’s done by calculating the minimum number of steps needed to make the object from its ingredients, which is quantified as the assembly index (AI).
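
As a rough illustration of the idea (a toy of my own, treating an object as a string and its ingredients as characters, not Cronin's molecular definition): the assembly index is the fewest joining steps needed to build the object when any fragment already built can be reused.

```python
from itertools import product

def assembly_index(target):
    """Minimum number of join steps needed to build `target` from its
    single-character building blocks, where any previously built fragment
    (including the blocks themselves) may be reused at each step."""
    blocks = frozenset(target)
    best = [len(target) - 1]          # upper bound: add one character at a time

    def search(pool, steps):
        if steps >= best[0]:
            return
        if target in pool:
            best[0] = steps
            return
        for a, b in product(pool, repeat=2):
            joined = a + b
            # only fragments that still appear inside the target can help
            if joined in target and joined not in pool:
                search(pool | {joined}, steps + 1)

    search(blocks, 0)
    return best[0]

# 4 steps (AN, ANAN, BANAN, BANANA) beat the 5 naive one-letter joins;
# the brute-force search gets slow quickly for longer strings.
print(assembly_index("BANANA"))
```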

In addition, for a complex object to be scientifically interesting, there has to be a lot of it. Very complex things can arise from random assembly processes — for example, you can make proteinlike molecules by linking any old amino acids into chains. In general, though, these random molecules won’t do anything of interest, such as behaving like an enzyme. And the chances of getting two identical molecules in this way are vanishingly small.

Functional enzymes, however, are made reliably again and again in biology, because they are assembled not at random but from genetic instructions that are inherited across generations. So while finding a single, highly complex molecule doesn’t tell you anything about how it was made, finding many identical complex molecules is improbable unless some orchestrated process — perhaps life — is at work.

Assembly theory predicts that objects like us can’t arise in isolation — that some complex objects can only occur in conjunction with others. This makes intuitive sense; the universe could never produce just a single human. To make any humans at all, it had to make a whole bunch of us.

In accounting for specific, actual entities like humans in general (and you and me in particular), traditional physics is only of so much use. It provides the laws of nature, and assumes that specific outcomes are the result of specific initial conditions. In this view, we must have been somehow encoded in the first moments of the universe. But it surely requires extremely fine-tuned initial conditions to make Homo sapiens (let alone you) inevitable.

Assembly theory, its advocates say, escapes from that kind of overdetermined picture. Here, the initial conditions don’t matter much. Rather, the information needed to make specific objects like us wasn’t there at the outset but accumulates in the unfolding process of cosmic evolution — it frees us from having to place all that responsibility on an impossibly fine-tuned Big Bang. The information “is in the path,” Walker said, “not the initial conditions.”

Cronin and Walker aren’t the only scientists attempting to explain how the keys to observed reality might not lie in universal laws but in the ways that some objects are assembled or transformed into others. The theoretical physicist Chiara Marletto of the University of Oxford is developing a similar idea with the physicist David Deutsch. Their approach, which they call constructor theory and which Marletto considers “close in spirit” to assembly theory, considers which types of transformations are and are not possible.

“Constructor theory talks about the universe of tasks able to make certain transformations,” Cronin said. “It can be thought of as bounding what can happen within the laws of physics.” Assembly theory, he says, adds time and history into that equation.

To explain why some objects get made but others don’t, assembly theory identifies a nested hierarchy of four distinct “universes.”

In the Assembly Universe, all permutations of the basic building blocks are allowed. In the Assembly Possible, the laws of physics constrain these combinations, so only some objects are feasible. The Assembly Contingent then prunes the vast array of physically allowed objects by picking out those that can actually be assembled along possible paths. The fourth universe is the Assembly Observed, which includes just those assembly processes that have generated the specific objects we actually see.

[Figure omitted: Merrill Sherman/Quanta Magazine; source: https://doi.org/10.48550/arXiv.2206.02279]

Assembly theory explores the structure of all these universes, using ideas taken from the mathematical study of graphs, or networks of interlinked nodes. It is “an objects-first theory,” Walker said, where “the things [in the theory] are the objects that are actually made, not their components.”

To understand how assembly processes operate within these notional universes, consider the problem of Darwinian evolution. Conventionally, evolution is something that “just happened” once replicating molecules arose by chance — a view that risks being a tautology, because it seems to say that evolution started once evolvable molecules existed. Instead, advocates of both assembly and constructor theory are seeking “a quantitative understanding of evolution rooted in physics,” Marletto said.

According to assembly theory, before Darwinian evolution can proceed, something has to select for multiple copies of high-AI objects from the Assembly Possible. Chemistry alone, Cronin said, might be capable of that — by narrowing down relatively complex molecules to a small subset. Ordinary chemical reactions already “select” certain products out of all the possible permutations because they have faster reaction rates.

The specific conditions in the prebiotic environment, such as temperature or catalytic mineral surfaces, could thus have begun winnowing the pool of life’s molecular precursors from among those in the Assembly Possible. According to assembly theory, these prebiotic preferences will be “remembered” in today’s biological molecules: They encode their own history. Once Darwinian selection took over, it favored those objects that were better able to replicate themselves. In the process, this encoding of history became stronger still. That’s precisely why scientists can use the molecular structures of proteins and DNA to make deductions about the evolutionary relationships of organisms.

Thus, assembly theory “provides a framework to unify descriptions of selection across physics and biology,” Cronin, Walker and colleagues wrote. “The ‘more assembled’ an object is, the more selection is required for it to come into existence.”

“We’re trying to make a theory that explains how life arises from chemistry,” Cronin said, “and doing it in a rigorous, empirically verifiable way.”

 

Wednesday, December 30, 2020

Computational Statistics (So-Called AI) Is Inherently Biased

FT |  Bayesian statistical models (so-called AI) inherently amplify the bias of whatever data set they have been trained on. Moreover, this amplification compounds: the more you use AI, the more biased it gets through self-learning. Since it is impossible to eliminate bias completely from a training dataset, any AI system will eventually become extremely biased. Self-correcting mechanisms suffer from the same problem, since they too are AI based; you end up with a system that is unstable and will always, eventually, become extremely biased on the basis of even minute, impossible-to-eradicate biases in its initial data set.
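
A toy sketch of that feedback-loop argument (an illustration of mine, not anything from the FT piece): a model repeatedly retrained on a finite sample of its own output never averages its sampling noise away, and the only stable states are the extremes, so even a tiny initial skew eventually drifts all the way to 0 or 1.

```python
import random

random.seed(1)

# A generator retrained on its own output: p is the probability the model
# assigns to label "A". Each generation it emits a finite synthetic training
# set and is refit on it. The drift never cancels out, and p = 0 and p = 1
# are the only absorbing states, so the model eventually goes to an extreme.

p = 0.48          # slightly skewed starting point
SAMPLE = 100      # size of each synthetic training set

for generation in range(2000):
    sample = sum(random.random() < p for _ in range(SAMPLE))
    p = sample / SAMPLE                      # "retrain" on the model's own output
    if generation % 100 == 0 or p in (0.0, 1.0):
        print(f"generation {generation:4d}: p = {p:.2f}")
    if p in (0.0, 1.0):
        break
```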

“F*** the algorithm!” became one of the catchphrases of 2020, encapsulating the fear that humanity is being subordinated to technology. Whether it was British school students complaining about their A level grades or Stanford Medical Centre staff highlighting the unfairness of vaccination priorities, people understandably rail against the idea of faceless machines stripping humans of agency. 

This is an issue that will only grow in prominence as artificial intelligence becomes ubiquitous in the computer systems that power our modern world. To some extent, these fears are based on a misconception. Humans are still the ones who exercise judgment and algorithms do exactly what they are designed to do: discriminate. Whether they do so in a positive or a negative way depends on the humans who write these algorithms and interpret and act upon their output. 

It may on occasion be convenient for a government official or an executive to blame some “rogue” algorithm for their mistakes. But we should not be fooled by this rhetoric. We should hold those who deploy AI systems legally and morally accountable for the outcomes they produce. Artificial intelligence is no more than a technological tool, like any other. It is a powerful general purpose technology, akin to electricity, that enables other technologies to work more effectively. But it is not a property in its own right and has no agency. AI would sound a lot less frightening if we were to relabel it as computational statistics. 

That said, companies, researchers and regulators should pay particular attention to the feedstock used in these AI systems: data. Researchers have shown that partial data sets used to power modern AI systems can bake in societal inequities and racial and sexual prejudices. This issue has been highlighted at Google following the departure of Timnit Gebru, an AI ethics researcher, who claimed she was dismissed after warning of the dangers of large-scale language generation systems that rely on historic data taken from the internet.

Tuesday, December 22, 2020

Aren't Cybercommand And The Einstein Crew At DHS Culpable For The Solarwinds Fail?

politico  |  Pentagon officials are making an 11th-hour push to potentially break up the joint leadership of U.S. Cyber Command and the National Security Agency, a move that would raise inevitable questions about Army Gen. Paul Nakasone's future as head of the country’s largest spy agency.

Five people familiar with the matter told POLITICO that senior Defense Department leaders are reviewing a plan to separate the two agencies, a move lawmakers and DoD had contemplated for years but one that had largely fallen by the wayside since Nakasone assumed command of both organizations in 2018. The Wall Street Journal reported that a meeting about the proposal is scheduled for this week. Defense One first reported the effort was afoot.

If successful, the move could create major upheaval just as national security officials try to determine the full scope of a monthslong hack of several major U.S. agencies — including the Homeland Security Department and the nuclear weapons branch of the Energy Department — by Russia’s elite spy agency.

Trump is “talking about trying to split up the cyber command from the national security agency,” House Armed Services Chair Adam Smith (D-Wash.) said Saturday night during an interview with CNN. “In the midst of a crisis, to be talking about that type of disruption makes us vulnerable again.”

On Friday, Smith sent letters to acting Defense Secretary Christopher Miller and the chairman of the Joint Chiefs of Staff, Gen. Mark Milley, warning them against severing the leadership of NSA and Cyber Command. The two agencies have shared leadership under a so-called dual-hat arrangement since the Pentagon stood up Cyber Command in 2009.

Nakasone has led the military’s top digital warfighting unit and the federal government’s largest intelligence agency for roughly two and a half years. He has re-imagined how both organizations can deploy their own hackers and analysts against foreign adversaries via a doctrine of “persistent engagement” — putting U.S. forces in constant contact against adversaries in cyberspace, including tracking them and taking offensive action.

The four-star is beloved by both Democrats and Republicans, especially after defending the 2018 and 2020 elections from foreign interference. Some lawmakers even joke they wish they could put Nakasone in charge of more parts of the federal government.

 

Sunday, December 20, 2020

Call It What You Like - But Complete Access To Digital DNA Has No Precedent

 wired |  In terms of the SolarWinds incident, the deterrence game is not yet over. The breach is still ongoing, and the ultimate end game is still unknown. Information gleaned from the breach could be used for other detrimental foreign policy objectives outside of cyberspace, or the threat actor could exploit its access to US government networks to engage in follow-on disruptive or destructive actions (in other words, conduct a cyberattack).

But what about the Department of Defense’s new defend forward strategy, which was meant to fill in the gap where traditional deterrence mechanisms might not work? Some view this latest incident as a defend-forward failure because the Defense Department seemingly did not manage to stop this hack before it occurred. Introduced in the 2018 Defense Department Cyber Strategy, this strategy aims to “disrupt or halt malicious cyber activity at its source.” This represented a change in how the Defense Department conceptualized operating in cyberspace, going beyond maneuvering in networks it owns, to operating in those that others may control. There has been some controversy about this posture. In part, this may be because defend forward has been described in many different ways, making it hard to understand what the concept actually means and the conditions under which it is meant to apply.

Here’s our take on defend forward, which we see as two types of activities: The first is information gathering and sharing with allies, partner agencies, and critical infrastructure by maneuvering in networks where adversaries operate. These activities create more robust defense mechanisms, but largely leave the adversary alone. The second includes countering adversary offensive cyber capabilities and infrastructure within the adversaries’ own networks. In other words, launching cyberattacks against adversary hacking groups—like threat actors associated with the Russian government. It isn’t clear how much of this second category the Defense Department has been doing, but the SolarWinds incident suggests the US could be doing more.

How should the US cyber strategy adapt after SolarWinds? Deterrence may be an ineffective strategy for preventing espionage, but other options remain. To decrease the scope and severity of these intelligence breaches, the US must improve its defenses, conduct counterintelligence operations, and also conduct counter-cyber operations to degrade the capabilities and infrastructure that enable adversaries to conduct espionage. That’s where defend forward could be used more effectively.

This doesn’t mean deterrence is completely dead. Instead, the US should continue to build and rely on strategic deterrence to convince states not to weaponize the cyber intelligence they collect.

Monday, December 07, 2020

The Incredible Difficulty Of Writing Chinese Characters On A Computer

happyscribe |  Listener supported WNYC Studios. Wait, you're OK? You're listening to Radiolab Radio from WNYC. Hey, I'm Jad Abumrad

[00:29]
This is Radiolab to start things off today.

[00:32]
A couple months ago, we also got to a small community in America in that magical, forgotten time before the coronavirus, our reporter Simon Adler somewhat mysteriously walked me a few blocks from our office making hand to a coffee shop.

[00:49]
OK, with our coffee purchased, let's go stand in the corner where it's maybe a little less loud. Sort of a fancy one. Exposed brick, bare Edison bulbs.

[00:57]
So let's gaze out upon the hipsters of Lower Manhattan and survey and count the number of laptops. Yeah. So how many laptops do you think are here? Starting from the left, we're going to circle around. We got one, two, three, four, five, six, two more on the... four more on the bar.

[01:16]
And they're all typing the same way. Right. Or they're all using a quirky keyboard.

[01:21]
Yeah. Yes.

[01:22]
And the reason he dragged me there, as I now know ("now let's imagine we're in Shenzhen, in a Chinese Starbucks"), was to point out a massive cultural difference hidden in plain sight and to propose a bit of a reporting trip.

[01:36]
Are you going to send somebody to to Starbucks in Shenzhen?

[01:39]
Well, that's my hope, that I will be the one sent to a Starbucks in Shenzhen, Wellfleet, Adler.

[01:46]
Now, you did not bite on that reporting trip. No. Plus, pretty soon thereafter, traveling to China became a lot more difficult.

Wednesday, May 27, 2020

Different Than Penrose and Weinstein: Wolfram REALLY Mesmerized By Rule 30


dr.brian.keating |  On the philosophical front, we compared Gödel to Popper and discussed computational irreducibility, which arose from Stephen’s interest in Gödel’s and Alan Turing’s work.

“Actually, there’s even more than that. If the microscopic updatings of the underlying network end up being random enough, then it turns out that if the network succeeds in corresponding in the limit to a finite dimensional space, then this space must satisfy Einstein’s Equations of General Relativity. It’s again a little like what happens with fluids. If the microscopic interactions between molecules are random enough, but satisfy number and momentum conservation, then it follows that the overall continuum fluid must satisfy the standard Navier–Stokes equations. But now we’re deriving something like that for the universe: we’re saying that these networks with almost nothing “built in” somehow generate behavior that corresponds to gravitation in physics. This is all spelled out in the NEW KIND OF SCIENCE book. And many physicists have certainly read that part of the book. But somehow every time I actually describe this (as I did a few days ago), there’s a certain amazement. Special and General Relativity are things that physicists normally assume are built into theories right from the beginning, almost as axioms (or at least, in the case of string theory, as consistency conditions). The idea that they could emerge from something more fundamental is pretty alien. The alien feeling doesn’t stop there. Another thing that seems alien is the idea that our whole universe and its complete history could be generated just by starting with some particular small network, then applying definite rules. For the past 75+ years, quantum mechanics has been the pride of physics, and it seems to suggest that this kind of deterministic thinking just can’t be correct. It’s a slightly long story (often still misunderstood by physicists), but between the arbitrariness of updating orders that produce a given causal network, and the fact that in a network one doesn’t just have something like local 3D space, it looks as if one automatically starts to get a lot of the core phenomena of quantum mechanics — even from what’s in effect a deterministic underlying model. OK, but what is the rule for our universe? I don’t know yet. Searching for it isn’t easy. One tries a sequence of different possibilities. Then one runs each one. Then the question is: has one found our universe?”
My question: that was then, what do you think now?
On the implications of finding a simple rule that matches existing laws of physics:
I certainly think it’ll be an interesting — almost metaphysical — moment if we finally have a simple rule which we can tell is our universe. And we’ll be able to know that our particular universe is number such-and-such in the enumeration of all possible universes. It’s a sort of Copernican moment: we’ll get to know just how special or not our universe is. Something I wonder is just how to think about whatever the answer turns out to be. It somehow reminds me of situations from earlier in the history of science. Newton figured out about motion of the planets, but couldn’t imagine anything but a supernatural being first setting them in motion. Darwin figured out about biological evolution, but couldn’t imagine how the first living cell came to be. We may have the rule for the universe, but it’s something quite different to understand why it’s that rule and not another. Universe hunting is a very technology-intensive business. Over the years, I’ve gradually been building up the technology I think is needed — and quite a bit of it is showing up in strange corners of Mathematica. But I think it’s going to be a while longer before there are more results. And before we can put “Our Universe” as a Demonstration in the Wolfram Demonstrations Project. And before we can take our new ParticleData computable data collection and derive every number in it. But universe hunting is a good hobby.”

It’s awfully easy to fall into implicitly assuming a lot of human context. Pioneer 10 — the human artifact that’s gone further into interstellar space than any other (currently about 11 billion miles, which is about 0.05% of the distance to α Centauri) — provides one of my favorite examples. There’s a plaque on that spacecraft that includes a representation of the wavelength of the 21-centimeter spectral line of hydrogen. Now the most obvious way to represent that would probably just be a line 21 cm long. But back in 1972 Carl Sagan and others decided to do something “more scientific”, and instead made a schematic diagram of the quantum mechanical process leading to the spectral line. The problem is that this diagram relies on conventions from human textbooks — like using arrows to represent quantum spins — that really have nothing to do with the underlying concepts and are incredibly specific to the details of how science happened to develop for us humans.”

From the audience, he responded to some questions, including "What does he believe a scientific theory should be?" and "Does mathematical beauty matter at all, or is it just falsifiability?"
The Story of Rule 30
How can something that simple produce something that complex? It’s been nearly 40 years since I first saw rule 30 — but it still amazes me. Long ago it became my personal all-time favorite science discovery, and over the years it’s changed my whole worldview and led me to all sorts of science, technology, philosophy and more.
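
Rule 30 itself fits in a few lines; a minimal sketch for anyone who wants to see the irregularity appear from a single cell:

```python
# Rule 30: each cell's next state depends on (left, self, right). The rule
# number's binary expansion (30 = 00011110) gives the output for each of the
# eight neighborhoods. From a single "on" cell the pattern turns irregular
# almost immediately.

RULE = 30
WIDTH, STEPS = 79, 38

row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```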


A Class of Models with the Potential to Represent Fundamental Physics



arxiv |  Stephen Wolfram: A class of models intended to be as minimal and structureless as possible is introduced. Even in cases with simple rules, rich and complex behavior is found to emerge, and striking correspondences to some important core known features of fundamental physics are seen, suggesting the possibility that the models may provide a new approach to finding a fundamental theory of physics.
Subjects: Discrete Mathematics (cs.DM); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph)
Cite as: arXiv:2004.08210 [cs.DM]
(or arXiv:2004.08210v1 [cs.DM] for this version)


Submission history

From: Stephen Wolfram [view email]
[v1] Wed, 15 Apr 2020 16:23:43 UTC (108,032 KB)

Sunday, May 06, 2018

Weaponized Autism: Fin de Siècle Programmer's Stone


melmagazine |  We know that people on the spectrum can exhibit remarkable mental gifts in addition to their difficulties; Asperger syndrome has been associated with superior IQs that reach up to the “genius” threshold (4chan trolls use “aspie” and “autist” interchangeably). In practice, weaponized autism is best understood as a perversion of these hidden advantages. Think, for example, of the keen pattern recognition that underlies musical talent repurposed for doxxing efforts: Among the more “successful” deployments of weaponized autism, in the alt-right’s view, was a collective attempt to identify an antifa demonstrator who assaulted several of their own with a bike lock at a Berkeley rally this past April.

As Berkeleyside reported, “the amateur detectives” of 4chan’s /pol/ board went about “matching up his perceived height and hairline with photos of people at a previous rally and on social media,” ultimately claiming that Eric Clanton, a former professor at Diablo Valley College, was the assailant in question. Arrested and charged in May, Clanton faces a preliminary hearing this week, and has condemned the Berkeley PD for relying on the conjecture of random assholes. “My case threatens to set a new standard in which rightwing extremists can select targets for repression and have police enthusiastically and forcefully pursue them,” he wrote in a statement.

The denizens of /pol/, meanwhile, are terribly proud of their work, and fellow Trump boosters have used their platforms to applaud it. Conspiracy theorist Jack Posobiec called it a new form of “facial recognition,” as if it were in any way forensic, and lent credence to another dubious victory for the forces of weaponized autism: supposed coordination with the Russian government to take out ISIS camps in Syria. 4chan users are now routinely deconstructing raw videos of terrorist training sites and the like to make estimations about where they are, then sending those findings to the Russian Ministry of Defense’s Twitter account. There is zero reason to believe, as Posobiec and others contend, that 4chan has ever “called in an airstrike,” nor that Russia even bothered to look at the meager “intel” offered, yet the aggrandizing myth persists.

Since “autistic” has become a catchall idiom on 4chan, the self-defined mentality of anyone willing to spend time reading and contributing to the site, it’s impossible to know how many users are diagnosed with the condition, or could be, or earnestly believe that it correlates to their own experience, regardless of professional medical opinion. They tend to assume, at any rate, that autistic personalities are readily drawn to the board as introverted, societal misfits in search of connection. The badge of “autist” conveys the dueling attitudes of pride and loathing at work in troll communities: They may be considered and sometimes feel like failures offline — stereotyped as sexless, jobless and immature — but this is because they are different, transgressive, in a sense better, elevated from the realm of polite, neurotypical normies. Their handicap is a virtue.

Saturday, April 28, 2018

Silly Peasants, Open Facebook Got NOTHING On Open "Consumer" DNA...,



NYTimes |  The California police had the Golden State Killer’s DNA and recently found an unusually well-preserved sample from one of the crime scenes. The problem was finding a match.

But these days DNA is stored in many places, and a near-match ultimately was found on GEDmatch, a genealogy website beloved by hobbyists, created by two volunteers in 2011.

Anyone can set up a free profile on GEDmatch. Many customers upload to the site DNA profiles they have already generated on larger commercial sites like 23andMe.

The detectives in the Golden State Killer case uploaded the suspect’s DNA sample. But they would have had to check a box online certifying that the DNA was their own or belonged to someone for whom they were legal guardians, or that they had “obtained authorization” to upload the sample.

“The purpose was to make these connections and to find these relatives,” said Blaine Bettinger, a lawyer affiliated with GEDmatch. “It was not intended to be used by law enforcement to identify suspects of crimes.”

But joining for that purpose does not technically violate site policy, he added.

Erin Murphy, a law professor at New York University and expert on DNA searches, said that using a fake identity might raise questions about the legality of the evidence.

The matches found in GEDmatch were to relatives of the suspect, not the suspect himself.

Since the site provides family trees, detectives also were able to look for relatives who might not have uploaded genetic data to the site themselves. 

Friday, April 13, 2018

Blockchain Is Not Only Crappy NSA Technology...,


medium |  Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction.

Last December I wrote a widely-circulated article on the inapplicability of blockchain to any actual problem. People objected mostly not to the technology argument, but rather hoped that decentralization could produce integrity.

Let’s start with this: Venmo is a free service to transfer dollars, and bitcoin transfers are not free. Yet after I wrote an article last December saying bitcoin had no use, someone responded that Venmo and Paypal are raking in consumers’ money and people should switch to bitcoin.

What a surreal contrast between blockchain’s non-usefulness/non-adoption and the conviction of its believers! It’s so entirely evident that this person didn’t become a bitcoin enthusiast because they were looking for a convenient, free way to transfer money from one person to another and discovered bitcoin. In fact, I would assert that there is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast.
The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters like IBM, NASDAQ, Fidelity, Swift and Walmart have gone long on press but short on actual rollout. Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right: the company Ripple decided the best way to move money across international borders was to not use Ripples.

A blockchain is a literal technology, not a metaphor

Why all the enthusiasm for something so useless in practice?

People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions.
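
A minimal sketch of that data structure, for concreteness (an illustration only; real chains add proof-of-work, consensus and a peer-to-peer network): a linear log in which each block commits to the hash of the previous one, so rewriting history breaks every later link.

```python
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "timestamp": time.time(), "tx": transactions})

def verify(chain):
    # every block must reference the hash of the block before it
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, ["alice pays bob 5"])
append_block(chain, ["bob pays carol 2"])
print(verify(chain))                       # True
chain[0]["tx"] = ["alice pays bob 500"]    # tamper with history
print(verify(chain))                       # False: the link to block 0 no longer matches
```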

themaven |  I completely agree with much of what you wrote here. I’d like to point out a couple things:

First, in regards to “There is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast.” There is in fact at least one such person: me. In 2010 I was looking for a payment system which did not have any possibility for chargebacks. It turns out that bitcoin is GREAT for that, and I became a blockchain enthusiast as a result.

The ugly truth about blockchain is that it is immensely useful, but only when you are in some way trying to circumvent an authority of some sort. In my case, I wanted to take payments for digital goods without losing any to chargebacks. It’s also great for sending money to Venezuela (circumventing the authority of the government of Venezuela, which would really rather you not). It’s great for raising money for projects (ICOs are really about circumventing various regulatory authorities who make that difficult). It’s great for buying drugs, taking payment for ransomware, and any number of terrible illegal things related to human trafficking, money laundering, etc.

Frankly, the day that significant trading of derivatives (gold futures, oil futures, options, etc) starts happening on blockchain, I expect a bubble that will make previous crypto bubbles look tiny in comparison. This is not because blockchain is an easier way to trade these contracts! It is because some percentage of rich traders would like to do anonymous trading and avoid pesky laws about paying taxes on trading profits and not doing insider trading.

I sum it up like this: are you trying to do something with money that requires avoiding an authority somewhere? If not, there is a better technical solution than blockchain. That does NOT mean that what you are doing is illegal for you (it’s perfectly legal for me to send money to Venezuela). It just means that some authority somewhere doesn’t like what you are doing.

Blockchain is inherently in opposition to governmental control of the world of finance. The only reason governments aren’t more antagonistic towards blockchain is that they don’t truly understand how dangerous it is. I wrote at length about this back in 2013 in an article called “Bitcoin’s Dystopian Future”:

Thursday, April 12, 2018

You Don't Own and Cannot Access or Control Facebook's Data About You



theatlantic |  But the raw data that Facebook uses to create user-interest inferences is not available to users. It’s data about them, but it’s not their data. One European Facebook user has been petitioning to see this data—and Facebook acknowledged that it exists—but so far, has been unable to obtain it.

When he responded to Kennedy, Zuckerberg did not acknowledge any of this, but he did admit that Facebook has other types of data that it uses to increase the efficiency of its ads. He said:
My understanding is that the targeting options that are available for advertisers are generally things that are based on what people share. Now once an advertiser chooses how they want to target something, Facebook also does its own work to help rank and determine which ads are going to be interesting to which people. So we may use metadata or other behaviors of what you’ve shown that you’re interested in News Feed or other places in order to make our systems more relevant to you, but that’s a little bit different from giving that as an option to an advertiser.
Kennedy responded: “I don’t understand how users then own that data.” This apparent contradiction relies on the company’s distinction between the content someone has intentionally shared—which Facebook mines for valuable targeting information—and the data that Facebook quietly collects around the web, gathers from physical locations, and infers about users based on people who have a similar digital profile. As the journalist Rob Horning put it, that second set of data is something of a “product” that Facebook makes, a “synthetic” mix of actual data gathered, data purchased from outsiders, and data inferred by machine intelligence.

With Facebook, the concept of owning your data begins to verge on meaningless if it doesn’t include that second, more holistic concept: not just the data users create and upload explicitly, but all the other information that has become attached to their profiles by other means.

But one can see, from Facebook’s perspective, how complicated that would be. Their techniques for placing users into particular buckets or assigning them certain targeting parameters are literally the basis for the company’s valuation. In a less techno-pessimistic time, Zuckerberg described people’s data in completely different terms. In October 2013, he told investors that this data helps Facebook “build the clearest models of everything there is to know in the world.”

Facebook puts out a series of interests for users to peruse or turn off, but it keeps the models to itself. The models make Facebook ads work well, and that means it helps small and medium-size businesses compete more effectively with megacorporations on this one particular score. Yet they introduce new asymmetries into the world. Gullible people can be targeted over and over with ads for businesses that stop just short of scams. People prone to believing hoaxes and conspiracies can be hit with ads that reinforce their most corrosive beliefs. Politicians can use blizzards of ads to precisely target different voter types.

As with all advertising, one has to ask: When does persuasion become manipulation or coercion? If Facebook advertisers crossed that line, would the company even know it? Dozens of times throughout the proceedings, Zuckerberg testified that he wasn’t sure about the specifics of his own service. It seemed preposterous, but with billions of users and millions of advertisers, who exactly could know what was happening?

Most of the ways that people think they protect their privacy can’t account for this new and more complex reality, which Kennedy recognized in his closing remark.

“You focus a lot of your testimony ... on the individual privacy aspects of this, but we haven’t talked about the societal implications of it ... The underlying issue here is that your platform has become a mix of ... news, entertainment, and social media that is up for manipulation,” he said. “The changes to individual privacy don’t seem to be sufficient to address that underlying issue.”

Thursday, March 29, 2018

Mercer/Thiel vs Kochtopus? Finance/Geopolitics/Data Science/Livestock Management


GregPalast |  There are two dangers in the media howl over Trump’s computer gurus Cambridge Analytica, the data-driven psy-ops company founded by billionaire brown-shirts, the Mercer Family.
The first danger is the story that Cambridge Analytica, once directed by Steve Bannon, is some unique "bad apple" of the cyber world, shoplifting Facebook profiles to bend your brain.

That's a dangerously narrow view. In fact, the dark art of dynamic psychometric manipulation in politics was not pioneered by Cambridge Analytica for Trump, but by i360 Themis, the operation founded by… no points for guessing… the Brothers Koch.

Mark Swedlund, himself an expert in these tools, explained in the film The Best Democracy Money Can Buy that i360 dynamically tracks you on 1,800 behaviors, or as Swedlund graphically puts it, "They know the last time you downloaded porn and whether you ordered Chinese food before you voted." Swedlund adds his expert conclusion: "I think that’s creepy."

The Koch operation and its competitor, DataTrust, use your credit card purchases, cable TV choices and other personal info — which is far more revealing about your inner life than the BS you put on your Facebook profile. Don’t trust DataTrust: This cyber-monster is operated by Karl Rove, "Bush’s Brain," who is principally funded by Paul Singer, the far Right financier better known as The Vulture.
Way too much is made of the importance of Cambridge Analytica stealing data through a phony app. If you’ve ever filled out an online survey, Swedlund told me, they’ve got you — legally.

The second danger is to forget that the GOP has been using computer power to erase the voting rights of Black and Hispanic voters for years — by "caging," "Crosscheck," citizenship challenges based on last name (Garcia? Not American!!), the list goes on — a far more effective use of cyberpower than manipulating your behavior through Facebook ads.

Just last week, Kris Kobach, Secretary of State of Kansas and Trump's chief voting law advisor, defended his method of hunting alleged "aliens" on voter rolls against a legal challenge by the ACLU. Kobach's expert, Jessie Richman, uses a computer algorithm that can locate "foreign" names on voter rolls. He identified, for example, one "Carlos Murguia" as a potential alien voter. Murguia is a Kansas-born judge who presides in a nearby courtroom.

It would be a joke, except that Kobach's "alien" hunt has blocked one in seven new (i.e. young) voters from registering in the state. If Kobach wins, it will, like his Crosscheck purge program and voter ID laws, almost certainly spread to other GOP controlled states. This could ultimately block one million new voters, exactly what Trump had in mind by pushing the alien-voter hysteria.

Wednesday, March 28, 2018

Have 99.999% Missed The Real Revolutionary Possibilities of Crypto?


hackernoon |  Money is power.

Nobody knew this better than the kings of the ancient world. That’s why they gave themselves an absolute monopoly on minting moolah.

They turned shiny metal into coins, paid their soldiers and their soldiers bought things at local stores. 

The king then sent his soldiers to the merchants with a simple message:

“Pay your taxes in this coin or we’ll kill you.”

That’s almost the entire history of money in one paragraph. Coercion and control of the supply with violence, aka the “violence hack.” The one hack to rule them all.

When power passed from monarchs to nation-states, distributing power from one strongman to a small group of strongmen, the power to print money passed to the state. Anyone who tried to create their own money got crushed.

The reason is simple:

Centralized enemies are easy to destroy with a “decapitation attack.” Cut off the head of the snake and that’s the end of anyone who would dare challenge the power of the state and its divine right to create coins.

Kings and nation states know the real golden rule: Control the money and you control the world.

And so it’s gone for thousands and thousands of years. The very first emperor of China, Qin Shi Huang (260–210 BC), abolished all other forms of local currency and introduced a uniform copper coin. That’s been the blueprint ever since. Eradicate alternative coins, create one coin to rule them all and use brutality and blood to keep that power at all costs.

In the end, every system is vulnerable to violence.

Well, almost every one.


Tuesday, March 27, 2018

Governance Threat Is Not Russians, Cambridge Analytica, Etc, But Surveillance Capitalism Itself...,


newstatesman |  It’s been said in some more breathless quarters of the internet that this is the “data breach” that could have “caused Brexit”. Given it was a US-focused bit of harvesting, that would be the most astonishing piece of political advertising success in history – especially as among the big players in the political and broader online advertising world, Cambridge Analytica are not well regarded: some of the people who are best at this regard them as little more than “snake oil salesmen”. 

One of the key things this kind of data would be useful for – and what the original academic study it came from looked into – is finding which Facebook Likes correlate with personality traits, or with other Facebook Likes.

The dream scenario for this would be to find that every woman in your sample who liked “The Republican Party” also liked “Chick-Fil-A”, “Taylor Swift” and “Nascar racing”. That way, you could target ads at people who liked the latter three – but not the former – knowing you had a good chance of reaching people likely to appreciate the message you’ve got. This is a pretty widely used, but crude, bit of Facebook advertising. 
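
A crude sketch of that correlation trick, on made-up profile data (the users and Likes below are invented for illustration): count which Likes co-occur with the target Like, then select users who have the proxies but not the target.

```python
from collections import Counter

profiles = {                                   # hypothetical users and their Likes
    "u1": {"The Republican Party", "Chick-Fil-A", "Nascar racing"},
    "u2": {"The Republican Party", "Chick-Fil-A", "Taylor Swift"},
    "u3": {"Chick-Fil-A", "Nascar racing"},
    "u4": {"Taylor Swift", "Indie films"},
}

target = "The Republican Party"

# which Likes most often co-occur with the target Like?
co = Counter()
for likes in profiles.values():
    if target in likes:
        co.update(likes - {target})
proxies = {like for like, _ in co.most_common(2)}

# audience: users who share the proxy Likes but never declared the target
audience = [u for u, likes in profiles.items()
            if proxies & likes and target not in likes]
print(proxies, audience)
```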

When people talk about it being possible Cambridge Analytica used this information to build algorithms which could still be useful after all the original data was deleted, this is what they’re talking about – and that’s possible, but missing a much, much bigger bit of the picture.

So, everything’s OK then?

No. Look at it this way: the data we’re all getting excited about here is a sample of public profile information from 50 million users, harvested from 270,000 people. 

Facebook itself, daily, has access to all of that public information, and much more, from a sample of two billion people – a sample around 7,000 times larger than the Cambridge Analytica one, and one much deeper and richer thanks to its real-time updating status. 

If Facebook wants to offer sales based on correlations – for advertisers looking for an audience open to their message, its data would be infinitely more powerful and useful than a small (in big data terms) four-year-out-of-date bit of Cambridge Analytica data. 

Facebook aren’t anywhere near alone in this world: every day your personal information is bought and sold, bundled and retraded. You won’t know the names of the brands, but the actual giants in this business don’t deal in tens of millions of data records; they deal in hundreds of millions, or even billions of records – one advert I saw today referred to a company which claimed real-world identification of 340 million people. 

This is how lots of real advertising targeting works: people can buy up databases of thousands or millions of users, from all sorts of sources, and turn them into the ultimate custom audience – match the IDs of these people and show them this advert. Or they can do the tricks Cambridge Analytica did, but refined and with much more data behind them (there’s never been much evidence Cambridge Analytica’s model worked very well, despite their sales pitch boasts). 

The media has a model when reporting on “hacks” or on “breaches” – and on reporting on when companies in the spotlight have given evidence to public authorities, and most places have been following those well-trod routes. 

But doing so is like doing forensics on the burning of a twig, in the middle of a raging forest fire. You might get some answers – but they’ll do you no good. We need to think bigger. 

Thursday, February 01, 2018

MIT Intelligence Quest


IQ.MIT |  We are setting out to answer two big questions: How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?

Drawing on MIT’s deep strengths and signature values, culture, and history, MIT IQ promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.

This is our quest.
Sixty years ago, at MIT and elsewhere, big minds lit the fuse on a big question: What is intelligence, and how does it work? The result was an explosion of new fields — artificial intelligence, cognitive science, neuroscience, linguistics, and more. They all took off at MIT and have produced remarkable offshoots, from computational neuroscience, to neural nets, to empathetic robots.

And today, by tapping the united strength of these and other interlocking fields and capitalizing on what they can teach each other, we seek to answer the deepest questions about intelligence — and to deliver transformative new gifts for humankind.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

Along with developing and advancing the technologies of intelligence, MIT IQ researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.

In all its activities, MIT IQ is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT IQ will connect and amplify existing excellence across labs and centers already engaged in intelligence research.

Join our quest.

Still Not Decoded...,


Smithsonian | The Voynich Manuscript has baffled cryptographers ever since the early 15th-century document was rediscovered by a Polish book dealer in 1912. The handwritten, 240-page screed, now housed in Yale University’s Beinecke Rare Book & Manuscript Library, is written from left to right in an unknown language. On top of that, the text itself is likely to have been scrambled by an unknown code. Despite numerous attempts to crack the code by some of the world’s best cryptographers, including Alan Turing and the Bletchley Park team, the contents of the enigmatic book have long remained a mystery. But that hasn’t stopped people from trying. The latest to give it a stab? The Artificial Intelligence Lab at the University of Alberta.

Bob Weber at the Canadian Press reports that natural language processing expert Greg Kondrak and grad student Bradley Hauer have attempted to identify the language the manuscript was written in using AI. According to a press release, the team originally believed that the manuscript was written in Arabic. But after feeding it to an AI trained to recognize 380 languages with 97 percent accuracy, its analysis of the letter frequency suggested the text was likely written in Hebrew. 

“That was surprising,” Kondrak says. They then hypothesized that the words were alphagrams, in which the letters are shuffled and vowels are dropped. When they unscrambled the first line of text using that method they found that 80 percent of the words created were found in the Hebrew dictionary. The research appears in the journal Transactions of the Association for Computational Linguistics.
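
A toy version of those two steps, with made-up data (the Alberta pipeline is far more sophisticated): compare letter-frequency profiles to guess a language, then decode an alphagram by matching its sorted letters against a dictionary.

```python
from collections import Counter

def letter_freqs(text):
    letters = [c for c in text.lower() if c.isalpha()]
    return {c: n / len(letters) for c, n in Counter(letters).items()}

def profile_distance(p, q):
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

def guess_language(text, references):
    # pick the reference sample whose letter-frequency profile is closest
    target = letter_freqs(text)
    return min(references,
               key=lambda lang: profile_distance(target, letter_freqs(references[lang])))

def unscramble(word, dictionary):
    # an alphagram keeps a word's letters but not their order,
    # so match on the sorted letters
    key = "".join(sorted(word))
    return [w for w in dictionary if "".join(sorted(w)) == key]

refs = {"english": "the quick brown fox jumps over the lazy dog",
        "spanish": "el veloz murcielago hindu comia feliz cardillo y kiwi"}
print(guess_language("a lazy dog dozing over a quick fox", refs))  # closest profile wins
print(unscramble("aelpp", ["apple", "pearl", "pleat"]))            # ['apple']
```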

Neither of the researchers are schooled in ancient Hebrew, so George Dvorsky at Gizmodo reports they took their deciphered first line to computer scientist Moshe Koppel, a colleague and native Hebrew speaker. He said it didn’t form a coherent sentence. After the team fixed some funky spelling errors and ran it through Google Translate, they came up with something readable, even if it doesn’t make much sense: “She made recommendations to the priest, man of the house and me and people.”
