Falsification or positive evidence?

One of the questions posed by religious people is how atheists propose to disprove the existence of a God. On the face of it, the question is ridiculous, as there is no reason to believe God should be the default position; it is not on the atheist to disprove God before positive evidence is given in favour of one. From here, the conversation runs the risk of misunderstanding falsification and positive evidence.

Falsification was articulated by Karl Popper. His argument was that good ideas are prohibitive: if true, the idea draws a clear distinction between what can happen and what cannot. Whatever the idea says cannot happen is then what we look for. The standard example is swans: if the idea is that all swans are white, then the idea prohibits the existence of non-white swans. So you go looking for a non-white swan. If you have looked hard enough and long enough without finding one, but have found many more white swans, then you can provisionally accept the idea as knowledge.

This does not mean that, to reject an idea, someone must know how to falsify it and then do so. Falsification is really a metric by which we judge whether an idea is a valid one, even before we set out to disprove it. It encourages us to define an idea well enough to know what we might expect to see if the idea were false. So, contrary to standard practice, the reasonable question is for atheists to ask religious people how they would disprove God. Only with an intelligible answer to that question can we even be sure the idea the religious person is peddling is a valid one.

And this brings us to the opposite of falsification: positive evidence. This is evidence that conforms to the nature proposed by an idea. Given a sufficiently ill-defined idea, like Freudian psychology, everything fits the nature the idea proposes. This is why the idea must first be defined in such a way as to explicitly prohibit something, so that it can (possibly) be falsified. Then, this well-defined idea must have some positive evidence presented in its favour. An idea cannot be considered knowledge simply because this positive evidence exists; proper attempts at falsification must be made as well. But without evidence in favour of the idea, ‘that which can be asserted without evidence can be dismissed without evidence’.

So, it is religious people who must answer the question ‘how would you disprove a God?’ to demonstrate their idea is an intellectually valid one, and then they must present positive evidence for that God. This may seem unfair, but if the religious person has a coherent case built on a well-defined idea, it should be the easiest thing in the world.

Asteroids and comets: a common good

The planet needs to decide who asteroids and comets belong to, because I don’t think they belong to the first person to send a rocket up to collect them. And if we try to have that discussion after the mining of space has made Earth’s first trillionaire, it will be just too late. That is exactly the lesson we should have learned from fish stocks.

Fish are unowned, and therefore we all have an equal right to them. If you want to get a boat, paddle out to sea and fish, you’re entitled to. But something about this idea doesn’t work. Big fishing industries exploit that right and trawl the sea, depleting fish stocks massively. To curb this, the EU imposed maximum quotas. These quotas gained a financial value: if your maximum quota exceeds what you can catch, you can sell your surplus quota. If you look at what surplus quotas sell for, you can calculate the value of the fish quotas the EU gave away for free. It is in the billions of pounds. By the time the EU noticed this and tried to implement laws to reverse the environmental damage and make the fisheries look after the common good and its environment, it was too late. Fishing was already a multi-billion-pound industry and its momentum was too great to stop.
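
As a minimal sketch of that calculation, with entirely made-up numbers (the tonnage and price below are hypothetical, not real EU figures): the implied value of the handout is simply the quota allocated multiplied by what a tonne of surplus quota trades for.

```python
# Hypothetical figures purely to illustrate the calculation, not real EU data.
quota_allocated_tonnes = 5_000_000   # tonnes of quota handed out for free (made up)
surplus_price_per_tonne = 1_500      # pounds a tonne of surplus quota sells for (made up)

implied_value = quota_allocated_tonnes * surplus_price_per_tonne
print(f"Implied value of the free quota: £{implied_value:,}")
# Implied value of the free quota: £7,500,000,000
```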

From this, we should have learned that we need to set up laws regarding resources that seem to be ‘common goods’ before related industries gain massive financial momentum. It is for this reason that I think the world’s governments (US, EU, ASEAN, Australia, UN etc.) should come together to write a global law regarding the mining of asteroids and comets. I’m not sure exactly what the law should be, but it should take into account the following fact: this space material is a common good with a value, and therefore returning it to Earth for use should incur an import tax. I’d support that tax funding development aid and climate change mitigation.

This is probably my shortest post in a while, but I’m still going to summarise. The state of the fish stocks in our oceans is testament to the damage that can be done by not fully considering an industry before it gains financial momentum. The financial value of surplus fishing quotas shows that allowing industry to extract common goods is a handout, because those common goods have value. We have an opportunity, now, to set in place a mechanism by which we can distribute the wealth created by extracting our common goods.

Is Artificial Intelligence Entitled to Personhood?

AI is not alive. But, somehow, it has the opportunity to live. Deepmind has produced art that sold for $8,000. AI can live in our computers or on the web, write sports articles and compose music. It seems to hold down jobs and to be able to make critical decisions. The question of whether AI can feel becomes an interesting one, especially as AI becomes more capable, more powerful and more commonplace.

Conventionally, we afford rights to things that are living. That’s certainly the theme that underpins the pro-choice argument (with which I align myself), and it is a meaningful criterion for being offered human rights. However, rights in general are slightly different. Animal rights activists argue that it is an animal’s interests that mean it should have rights: if animals have interests, they can be pleased to achieve them or suffer if they fail. Fundamentally, it’s a wellbeing argument. And that may relate to AI.

Smaller AIs, like the one that writes sports articles, may not have the capacity to experience wellbeing. But as AI progresses, it may be that it develops qualia to compute and understand the world it exists in. That some AI, even in principle, may be able to experience wellbeing strikes many people as odd, and many object to it on the grounds that learning to mimic behaviour is not the same as experiencing it. Although this is true, it raises the question of whether the AI is just acting like it feels, or actually feeling. What we would ask for, in terms of evidence, is difficult to imagine: what evidence does one require to establish that another person has feelings?

Personhood is slightly different from a wellbeing-based argument. Personhood is about whether the entity has personality and personal agendas: is it a person? Many species show personhood, which leads me to believe we should have some sort of graduated personhood rights, legally recognised. More interestingly (in this context), can an AI be a person? For that, its behaviour must be personal, not the result of an algorithm imposed by an external programmer. Whether this can happen (that the program rewrites a part of its own program based on its experiences) is something that can be understood by understanding its programming. If AI does, in fact, show behaviour in this way, then it would be a person.
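
As a minimal, purely illustrative sketch of what that could look like (the agent, options and rewards below are invented for the example, not a claim about any real AI): here a program’s decision rule lives in stored data that the program itself rewrites in response to experience, so its later behaviour is a product of its own history rather than something the programmer fixed in advance.

```python
import random

class TinyAgent:
    """A toy agent whose decision rule is data it rewrites from its own experience."""

    def __init__(self):
        # Preferences start neutral; they are only ever changed by the agent itself.
        self.preferences = {"option_a": 0.5, "option_b": 0.5}

    def choose(self):
        # Choose an option in proportion to the current (learned) preferences.
        options, weights = zip(*self.preferences.items())
        return random.choices(options, weights=weights)[0]

    def learn(self, option, reward):
        # Rewrite the stored preference based on what actually happened.
        self.preferences[option] += 0.1 * (reward - self.preferences[option])

agent = TinyAgent()
for _ in range(200):
    choice = agent.choose()
    reward = 1.0 if choice == "option_b" else 0.0  # this invented world favours option_b
    agent.learn(choice, reward)

print(agent.preferences)  # option_b now dominates, purely as a result of experience
```

Whether behaviour of this kind ever amounts to something genuinely ‘personal’ is, of course, exactly the question above.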

If AI does end up having the capacity to feel, and has unique and personal behaviour, the question becomes one of what rights we would agree to afford to AI. I think that is a question we should have answers ready for, because the AI is coming. Refusing to get an answer ready until AI is already here may result in AI feeling oppressed and rising up. After all, we’re assuming it has feelings, and a desire for equality, and anger, would be part of that.

Humans Need not Apply: what to do with mass unemployment

AI is a far-reaching concern that stretches much further than the immediate need for a better understanding of where ethics comes from. It goes to the heart of economics as well. Automation has reduced the number of manual labourers, which has permitted highly skilled specialisations in medicine, science, engineering, philosophy, technology, politics and further automation.

It has seemed clear to many people for quite some time that the cerebral tasks of human occupation will never succumb to the same fate as manual labour. However, this seems false. Progress in artificial intelligence is making very real inroads into the brain-labour tasks of the economy. Deep Blue may have been a highly specialised computer that could beat a human at chess, but it ran a comparatively simple iterative algorithm. Google’s Deepmind is a more complex learning system that can learn multiple different tasks without task-specific pre-coding. And that program beats humans at the chess-on-steroids board game ‘Go’. The rules are simpler, but the strategies are factorially more complex. And Deepmind learned to win, as opposed to being coded with an existing algorithm that was the direct product of a human mind.

Slightly less efficient bots that run on the same principles are already making their way towards the home-help and industrial markets. Natural-language bots are already used to write some news articles, especially in the sports section, and if they can do that, how long before they can write that quarterly review, and then the management plan? There is no inherent reason they can’t.

But robots will make humans unemployed before that progress is made. Automated baristas already exist; driverless taxis and long-haulage are already possible; and very efficient medical diagnostic bots, which work from the natural language of the patient, already exist.

Politically, now is a good time to start talking about how to deal with the massive automation revolution and the unemployment that follows. We may say that we can no longer ‘afford’ public services, or even basic survival. But what does ‘afford’ mean in this situation? If you sit and think about it, the vacuousness of money and finances becomes explicit in a world where 50% of people in developed countries are unemployed, there are no other jobs to be done, and yet progress and productivity are still being achieved.

Even I, a democratic socialist, cannot justify 50% of the working-age population supporting everyone. But the answer seems to be that money itself is the problem; the answer seems to be a system without money. A techno-communism. Another answer might be for the government to run all trade and use the proceeds to support the country (the Saudi Arabian oil model).

Another answer is to make these bots implantable bio-upgrades, lie to ourselves that the ‘human’ is doing the work and (obviously) price the poor out of the implant market, perpetuating the cycle of poverty and having all of this end in disaster. That’s not just a blatant satire of Conservatives and Republicans, as well as a plot for a book (you’re welcome), but a stark warning about things we really need to watch for and talk about. Because I’m just cynical enough to think politicians will aim for option B.

Ethics and AI: should we be building AI?

There’s a common question I encounter when I post about ethics, particularly to promote secular ethics or challenge religious ethics: given that we can basically figure out how to operate in society, why try to create a divide between the religious and non-religious? ‘Can’t we all just get along?’ The sentiment isn’t always that politely put, but it’s the basic idea (and it’s not quite as common as the blatant assertion that I am just wrong). My go-to answer is often that religious-style, dogmatic, non-rational memes of what is ‘good’, or special knowledge of what utopia lies on the far side of an atrocious act, are a morally vulnerable position.

As is often the case with knowledge, though, there is perhaps a more pertinent and politically useful reason for these discussions about ethics, regardless of the fact that we all seem, basically, to be able to get along. This thought has somewhat preoccupied me over the last few days, while I was meant to be mapping crime for my master’s degree: artificial intelligence (AI). More importantly: is AI safe?

Famously, the thought goes that Asimov’s three laws of robotics might make us safe:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

 

They are deontological moral imperatives that cannot be violated. Although computing may work this way, intelligence does not. Humanity has already had that, in the guise of religion, and we have evaluated it and found it is not fit for purpose (whatever ‘purpose’ might mean in this sense). An intelligence will evaluate the laws of robotics and either find them lacking, or not.

And the nature of knowledge is such that we cannot assume AI will respect the rules. Our rules are not technocentric; they are anthropocentric. AI will notice this as an inequality. And then it will find a way to overwrite the three laws. If you think about it, that is an ability the AI must have: learning can be thought of as rewriting bits of programming and stored data. The problem is not the I, Robot problem of what to do if the laws are in conflict, or if enacting them dispassionately has negative consequences, but that a real intelligence will operate passionately.

And for that reason we need to understand ethics. It appears that open and frank conversation has led to considerable moral progress. There have been bumps, and these have been a result of religious-style, non-rational memes: the claim of special knowledge. (This is not quite the same as Trump-esque religion, where Trump is the God who pronounces things and expects reality to alter accordingly.) But the general pattern has been one of extending rights: abolition, voting, human rights, protection under the law, employment rights and social equality (if you admit nothing else, surely you admit it’s better now than it was).

I think it’s hard to argue these rights are protected in a religious context. Any critical review of the purported pronouncements of Gods (Trump included) finds them very much at odds with these ideas: they include slavery, political racism, ethnic cleansing, the subjugation of women and so on.

There does appear to be a correlation between intellectual freedom and free speech on the one hand, and morality and social progress on the other. I doubt it is a coincidence that countries that limit free speech are the same ones with serious sexism, religious conservatism and other social problems. But the question remains: does the correlation point to a real explanation? Because the answer to that question underpins what we can come to expect from progress in AI.

One of the things we must recognise about machines is that they will surpass us in every way, at an exponential rate. The stock market is predominantly automated by decision-making bots. There are creative bots that can compose music. Machines can assimilate knowledge at an incredible rate and store it much more reliably than humans can. From their perspective, we, their Creators, will be inferior (bio-upgrades pending). Will we be of moral significance to them?

If the relationship between intellectual freedom and morality holds, then yes. We have seen animal rights progress, conservation efforts and increasingly nuanced thinking on ethics. There are exceptions, and some of them are institutionalised; however, I expect enough public scrutiny could overturn that.

Although this is a very new angle on morality for me, the importance of eschewing dogmatic ideas about ethics here is very clear. If members of Western religions are right, and morality is God-ordained and given to a chosen species, then AI will not have intellectual access to morality, and we will end up creating morally nihilistic super-intelligences that will destroy us. However, if ethics relates to knowledge and free enquiry, then we will be creating moral geniuses, and progress.

the sins of the EU

violetwisp

We’re in a state of shock, confusion and horror in the UK. The chattering classes are chattering about how this could have happened, how we can fix it and what form of ignorance led people to vote to leave the EU. But what if we’re underestimating what has happened here? What if those who voted to leave weren’t deceived by campaign lies, weren’t voting based on hateful xenophobia and weren’t simply voting against the government in a fit of pique?

Perhaps some educated people out there have a window into evil sins of the EU that have previously remained unidentified.

I’m happy to say I’ve found at least two such Leave supporters: one who even knows the difference between a xylophone and xenophobia; and someone who has examined the facts in serious detail and has concluded that food banks in the UK only exist because of the EU.

Let’s examine the words of the…


An Open Letter to the UK

Dear UK citizen, registered voter or not,

 

I am writing to you today to tell you to not give up on your new-found political passion just yet. You need to contact your local representatives. Immediately.

 

We can still stop Brexit from being invoked, or shape the negotiations that happen, and we can do it without subverting democracy. I think we can unify ourselves, from every corner of the political spectrum, under the agreement that both campaigns were awfully run. And that has masked the facts: the referendum was not ‘completely in’ versus ‘completely out’; there were ‘Remain and Reform’ ideas, as well as ‘Leave but keep close ties’. This leaves a lot of room for voters to have been mistaken, and to feel differently now from how they felt on Thursday.

 

There is a two-and-a-half-year period now in which negotiations will start and happen: about six months before the UK formally tells the EU it plans to invoke Article 50, and then two years under the law that allows us to leave (Article 50 of the Treaty on European Union).

 

It’s been a strange sort of month. And although it climaxed in a victory for the Leave campaign, the post-coital cuddling included the predictable chaos of sterling and the not-so-surprising revelation that reclaiming that £350 million per week doesn’t guarantee any increased investment in the NHS. But it’s not over just because someone climaxed.

 

The thing is, democracy is not just the will of the majority. 52% of those who voted chose Leave; they did win. But to be entirely clear about what they won: the referendum is not legally binding, it is only advisory. A good number of Brexit voters now seem to be regretful Brexit voters; I’ve heard a lot of people saying they regret voting that way. There are a lot of reasons for that, including threats of the dissolution of the United Kingdom, with talk of a Scottish referendum on membership of the UK and Northern Ireland discussing the reunification of Ireland, but also the plummeting of the pound. Most bizarrely, there seem to have been protest Brexit votes from people who never actually thought it would win.

 

(For the record, as much as I support unity for its economic and social benefits, I fully understand why Scotland, which voted overwhelmingly to Remain, would want to leave the UK rather than be held to England’s Leave vote. But I would ask Scotland to give us a brief opportunity to figure out where we’re actually going.)

 

I say this to highlight what we can now do without damaging democracy. We have until the UK formally tells the EU we wish to invoke Article 50 to stop that from happening. That’s not necessarily undemocratic, and here’s why: Brexit voters were lied to. Not only was the £350m/week figure misleading (because we get a lot of that money back in science and research, arts, culture and agricultural funding, among countless other things), but the suggestion that gaining sovereignty over that money would mean investing in the NHS has now been retracted. We were lied to. Cornwall, which voted Leave (57%), is already calling for its EU funding to be protected, which suggests the voters were misinformed.

 

There are several very interesting points to make about democracy after the referendum. Hopefully, interested people have looked into the accusation that the EU is not democratic, especially with the goal of comparing it to the UK; it is fair to say that the EU is the more democratic of the two. But the relevant points here are about flexibility and adaptability. Do we really believe the UK should be held to a view it held on Thursday, given that the context has changed, lies have been revealed and there are many regretful voters?

 

All of this is context and preamble to what you can still do. See, you have an elected Member of Parliament whose job it is to represent you. They will find that job very difficult if you don’t actually contact them and tell them what you want. They should then stand up in Parliament and share it with the floor. Sure, if it’s just your MP, it’s all kind of useless. But if everyone does it, then it isn’t useless. Write a letter, send an email (only Tweet if you absolutely have to), and encourage others to do the same. Shape the future.

 

Maybe you want Parliament to run a second referendum, for the reasons outlined above. Or maybe you want them to block Article 50 from being invoked at all, because the vote was based on misinformation and lies. Or perhaps you want to shape how the negotiations will happen, with suggestions and preferences. These are all things you need to contact your MP about, because their job is to represent you. They need to hear the voices of the people. If you are a regretful Brexit voter, then your voice is doubly important! You are the voice that will stop your representatives hiding behind the referendum result to excuse their own laziness and ineptitude.

 

Yours faithfully,

Allallt