
Thursday, January 17, 2019

17/1/19: Why limits to AI are VUCA-rich and human-centric


Why will ethics, and a proper understanding of VUCA environments (environments characterized by volatility/risk, uncertainty, complexity and ambiguity), matter even more in the future than they do today? Because AI will require human control, and that control won't happen along the programming-skills axis; it will trace ethical and VUCA-environment considerations.

Here's a neat intro: https://qz.com/1211313/artificial-intelligences-paper-clip-maximizer-metaphor-can-explain-humanitys-imminent-doom/. The examples are neat, but now consider one of them, touched on in passing in the article: translation and interpretation. Near-perfect (native-level) language capabilities for AI are not only 'visible on the horizon', but are approaching us at break-neck speed. The hardware - a bio-tech link that can be embedded into our hearing and speech systems - is 'visible on the horizon'. With that, routine translation-requiring exchanges, such as basic meetings and discussions that do not involve complex, ambiguous and highly costly terms, are likely to be automated or outsourced to the AI. But there will remain the 'black swan' interactions - exchanges that involve huge costs of getting the meaning of the exchange exactly right, and that also trace the VUCA-type environment of the exchange (ambiguity and complexity are natural domains of semiotics). Here, human oversight over AI, and even human displacement of AI, will be required. And this oversight will not be based on the technical / terminological skills of translators or interpreters, but on their ability to manage ambiguity and complexity. That, and ethics...

Another example is even closer to our times: AI-managed trading in financial assets. In normal markets, when there is a clear, stable and historically anchored trend for asset prices, AI can't be beaten on the efficiency of trade placement and execution. By removing / controlling for our human behavioral biases, AI can effectively avoid big risk spillovers across traders and investors sharing the same information in the markets (although AI can also amplify some costly biases, such as herding). However, this advantage turns into a loss when markets are trading in a VUCA environment. When ambiguity about investor sentiment and/or direction, or complexity of the counterparties underlying a transaction, or uncertainty about price trends enters the decision-making equation, algorithmic trading platforms have three sets of problems they must confront simultaneously (see the sketch after this list):

  1. How do we detect the need for, structure, price and execute a potential shift in investment strategy (for example, from optimizing yield to maximizing portfolio resilience)? 
  2. How do we use AI to identify the points for switching from consensus strategy to contrarian strategy, especially if algos are subject to herding risks?
  3. How do we migrate across unstable information sets (as information fades in and out of relevance or stability of core statistics is undermined)?
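
To make these concrete, here is a stylized sketch - with purely illustrative, assumed proxies and thresholds, not a production strategy - of how an algo might detect a VUCA regime and flip from a consensus book to a contrarian / resilience book:

```python
import numpy as np

def rolling_vol(returns: np.ndarray, window: int = 20) -> float:
    """Realised volatility over the trailing window: a crude volatility proxy."""
    return float(np.std(returns[-window:]))

def dispersion(forecasts: np.ndarray) -> float:
    """Cross-sectional disagreement among forecasts: a crude ambiguity proxy."""
    return float(np.std(forecasts))

def choose_strategy(returns, forecasts, vol_limit=0.02, disagreement_limit=0.015):
    # The three problems above collapse, for the machine, into one brittle
    # question: is the environment still one where the consensus book is safe?
    if rolling_vol(returns) > vol_limit or dispersion(forecasts) > disagreement_limit:
        return "resilience / contrarian (escalate to human oversight)"
    return "consensus / trend-following"

rng = np.random.default_rng(1)
calm = rng.normal(0.0005, 0.01, 60)        # stable, trending market
stressed = rng.normal(-0.002, 0.04, 60)    # VUCA-type market
print(choose_strategy(calm, rng.normal(0.01, 0.005, 30)))
print(choose_strategy(stressed, rng.normal(0.0, 0.03, 30)))
```

The fragility is the point: the proxies and thresholds are themselves subjective choices, and deciding when they stop being valid is exactly where human judgment, and ethics, re-enter.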

For a professional trader/investor, these are 'natural' spaces for decision making. They are also VUCA-rich environments. And they are environments in which errors carry significant costs. They can also be coincident with ethical considerations, especially for mandated investment undertakings, such as ESG funds. As in the case of translation/interpretation, nuance can be more important than the core algorithm, and this is especially true when ambiguity and complexity rule.

Tuesday, October 16, 2018

16/10/18: Data analytics. It really is messier than you thought


An interesting study (H/T to @stephenkinsella) highlights a problem with the empirical determinism that underpins our (human) evolving trust in 'Big Data' and 'analytics': the lack of determinism in statistics when it comes to social / business / finance etc. data.

Here is the problem: researchers put together 29 independent teams, with 61 analysts in total. They gave these teams the same data set on football referees' decisions to give red cards to players. They asked the teams to evaluate the same hypothesis: whether football "referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players".

Owing to variation in the analytic models used, the teams produced a range of answers: the estimated effect of a player's skin color on red-card issuance ran from 0.89 at the lower end of the range to 2.93 at the higher end (in odds-ratio terms). The median effect was 1.31. Per the authors, "twenty teams (69%) found a statistically significant positive effect [meaning that they found the skin color having an effect on referees' decisions], and 9 teams (31%) did not observe a significant relationship" [meaning, no effect of the players' skin color was found].
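
A toy illustration of the mechanism (simulated data and made-up covariates and coefficients, not the paper's data set): the same data, run through three equally defensible logistic-regression specifications, yields three different odds ratios for the same 'effect'.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
skin_tone = rng.uniform(0, 1, n)                      # 0 = light, 1 = dark
position = rng.integers(0, 4, n) + (skin_tone > 0.5)  # covariate mildly correlated with tone
league = rng.integers(0, 5, n)                        # independent covariate
logit_p = -3 + 0.3 * skin_tone + 0.15 * position + 0.1 * league
red_card = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Three defensible analyst choices of control variables:
specs = {
    "skin tone only":      [skin_tone],
    "+ position":          [skin_tone, position],
    "+ position + league": [skin_tone, position, league],
}
for name, cols in specs.items():
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.Logit(red_card, X).fit(disp=0)
    print(f"{name:22s} odds ratio = {np.exp(fit.params[1]):.2f}")
```

None of these specifications is 'wrong'; the spread in estimates is what the crowdsourced study makes transparent.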

To eliminate the possibility that analysts’ prior beliefs could have influenced their findings, the researchers controlled for such beliefs. In the end, prior beliefs did not explain these differences in findings. Worse, "peer ratings of the quality of the analyses also did not account for the variability." Put differently, the vast difference in the results cannot be explained by quality of analysis or priors.

The authors conclude that even absent biases and personal prejudices of the researchers, "significant variation in the results of analyses of complex data may be difficult to avoid... Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results."

Good luck putting much trust into social data analytics.

Full paper is available here: http://journals.sagepub.com/doi/pdf/10.1177/2515245917747646.

Wednesday, May 23, 2018

23/5/18: American Exceptionalism, Liberty and... Amazon


"And the star-spangled banner in triumph shall wave
O'er the land of the free and the home of the brave!"

The premise of American Exceptionalism rests on the hypothesis of a State founded on the principles of liberty.

Enter Amazon, a corporation ever hungry for revenues, and the State, a corporation ever hungry for power and control. Per reports (https://www.aclunc.org/blog/amazon-teams-law-enforcement-deploy-dangerous-new-face-recognition-technology), Amazon "has developed a powerful and dangerous new facial recognition system and is actively helping governments deploy it. Amazon calls the service “Rekognition.”"

As ACLU notes (emphasis is mine): "Marketing materials and documents obtained by ACLU affiliates in three states reveal a product that can be readily used to violate civil liberties and civil rights. Powered by artificial intelligence, Rekognition can identify, track, and analyze people in real time and recognize up to 100 people in a single image. It can quickly scan information it collects against databases featuring tens of millions of faces, according to Amazon... Among other features, the company’s materials describe “person tracking” as an “easy and accurate” way to investigate and monitor people."

As I noted elsewhere on this blog, the real threat to American liberal democracy comes not from external challenges, attacks and shocks, but from the internal erosion of liberal democratic institutions, followed by the decline of public trust in, and engagement with, these institutions. The enemy of America is within, and companies like Amazon are facilitating the destruction of American liberty, aiding and abetting unscrupulous and power-hungry governments, local, state and beyond.


Wednesday, January 31, 2018

31/1/18: What Teachers of Piketty Miss on r vs g


A popular refrain in today's political and socio-economic analysis has been the need for aggressive Government intervention (via taxation and regulation) to reverse growing wealth inequality. The narrative is supported by increasing numbers of centre and centre-left voters, and is firmly held among the key emerging demographic of Millennial voters. The same narrative can also be traced to the emergence of some (not all) populist movements and political figures.

Yet, through regulatory restrictions, Governments in the recent past have not only attempted to manage risks, but also created a system of superficial scarcity in the supply of common goods & services (healthcare, education, housing etc.) and skills, as well as in access to professional services markets for practitioners. This scarcity de facto redistributes income (& thus, wealth) from the poor to the rich, from those not endowed with assets to those who inherit them or acquire them through other non-productive means, e.g. marriage, corruption, force. Many licensing requirements, touted by the Governments as the means for ensuring consumer protection, delivering social good, addressing market failures and so on, are either too cumbersome (creating de facto bounds on supply) or outright skewed in favour of the incumbents (e.g. financial services licensing restrictions in trivial areas of sales and marketing).

The re-distribution takes the form of high rents (paid for basic services that are woefully undersupplied: consider the California ‘water allocations’ and local authorities dumping federal subsidies to military personnel onto private sector renters, or consider the effect of pensions subsidies to police and other public services providers that are paid for by poorer taxpayers who themselves cannot afford a pension). 

This benevolent-malevolent counter-balancing in Government actions has fuelled wealth inequality, not reduced it, and the voters appear to be largely oblivious to this reality.

Crucially, the mechanism of this inequality expansion is not the simple r>g relationship between returns to capital (r) and the growth rate in the economy (g), but a more complex r(k) > r(hh) > g > r(lh) relationship between: returns to financial & restricted capital (r(k)), incl. property & water rights in California, etc; returns to high-quality human capital (r(hh)), incl. returns to regulated (rationed) professions; the rate of growth in the economy (g); and returns to low-quality human capital (r(lh)), incl. returns to productive, but un-rationed, professions.

Why is this crucial? Because the r>g-driven inequality, the type decried by Mr. Piketty and his supporters, misses a lot of what is happening in the labor markets and in large swathes of organisational structures, from limited partnerships to sole traders. Worse, lazy academia, across a range of second-tier institutions, has adopted Piketty's narrative unchecked, teaching students the r vs g tale without considering the simple fact that neither r nor g is well-defined in modern economics; both require more nuanced insight.

Yes, we now know that r>g, and by a fat margin (see https://www.frbsf.org/economic-research/files/wp2017-25.pdf). And, yes, that is a problem. But that is only one half of the problem, because it helps explain, in part, the 1% vs 99% wealth distribution imbalances. It cannot explain the 10% vs 90% gap. Nor can it explain why we are witnessing the hollowing-out of the middle class, and the upper middle class. A more granular decomposition of r (and a more accurate measurement of g - another topic altogether) can help, as the toy exercise below illustrates.
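
Compounding alone shows why the decomposition matters. The rates here are pure assumptions, chosen only to respect the ordering r(k) > r(hh) > g > r(lh); they are not estimates:

```python
# Purely illustrative compounding exercise: the rates below are assumptions,
# not measured returns.
rates = {"r(k)": 0.06, "r(hh)": 0.03, "g": 0.02, "r(lh)": 0.01}
years = 30

for label, rate in rates.items():
    print(f"{label:6s} {rate:4.0%} -> {(1 + rate) ** years:5.2f}x after {years} years")

# Divergence of restricted-capital holders from low-quality human capital earners:
gap = (1 + rates["r(k)"]) ** years / (1 + rates["r(lh)"]) ** years
print(f"r(k) holders end up {gap:.1f}x ahead of r(lh) earners")
```

Even modest, persistent wedges between the four returns compound into exactly the 10% vs 90% divergence that the simple r vs g story cannot see.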

The non-corporate entities and high human capital individual earners can still benefit from the transfers from the poorer and the middle classes, but these benefits are not carried through traditional physical and financial capital returns or corporate rent-seeking. (Do not get me wrong: those are also serious problems in the structure of the modern economy.)

Take, for example, two professionals: an astrophysicist employed in a research lab and a general medical practitioner. The two possess asymmetric human capital: the astrophysicist has more of it than the general medical doctor. Not only in the duration of knowledge acquisition (quantity), but also in the degree of originality of knowledge (quality). But one's supply of competitors is rationed by the market (astrophysics' high barrier to entry is… er… the need to acquire a lot of hard-to-earn human capital, with sky-high opportunity costs), while the other's is rationed by the licensing and education systems. Guess which one earns more? And guess which one has access to transfers from the lower earners that can be, literally, linked to punitive bankruptcy costs? So how much of the earnings of the physician (especially the premium over the astrophysicist's wage) can be explained by a license to asymmetric information (extracting rents from patients) and by restrictions on entry into the profession that go beyond assurance of quality? How much of these earnings are compensation for the absurdity of immense tuition bills collected by the medical schools, with their own rent-seeking markets for professional education? And so on.

In a way, thus, the Governments have acted as agents for creating & sustaining wealth inequality, at the same time as they claimed to be the agents for alleviating it. 

Yes, consumer-protecting regulation is necessary. No question. Yes, licensing is often necessary too (e.g. in the case of a physician, as opposed to a physicist). But, no - transfers under Government regulations are not always linked to the delivery of real and tangible benefits of quality assurance. Take, for example, restrictive development practices and excessively costly planning bureaucracies in cities like San Francisco. While some regulation and some bureaucracy are necessary, a lot of it is pure transfer from renters and buyers to bureaucrats as well as investors.

So, do a simple arithmetic exercise. Take $100 of income earned by a young professional. Roughly 33% of that goes in various taxes and indirect taxes to the Governments. Another 33% goes to the landlord, protected by these same Governments from paying the full cost of bankruptcy (limited liability) and from competition by restrictive new building and development rules. Another 15% goes to pay for various insurance products, again regulated and/or required by the Governments - health, cars, renters' etc. What's left? Less than 20% of income puts gas into the car or pays for transportation, buys food and clothing. What exactly remains to invest in financial and real assets that earn the r(k) and alleviate wealth inequality? Nada. And if you have to pay for debt incurred in earning your r(hh) or even r(lh), you are… well… insolvent. Personal savings averaged close to 6.5-7% of disposable income in 2010-2014. Since then, the rate collapsed to 2.4% as of December 2017. Remember - these are percentages of disposable income, not gross income. Is that enough to start investing in physical and/or financial capital? No. And the numbers quoted are averages, so median savings are even lower than that.
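
Here is that arithmetic spelled out. The shares are the stylized assumptions used above, not measured data:

```python
# The post's own back-of-the-envelope exercise, using its stylized shares.
income = 100.0
taxes = 0.33 * income      # direct and indirect taxes
rent = 0.33 * income       # to a landlord sheltered by restrictive rules
insurance = 0.15 * income  # health, car, renters' etc.
residual = income - taxes - rent - insurance
print(f"left for transport, food and clothing: {residual:.0f}% of income")

# At the December 2017 savings rate of 2.4% of *disposable* income:
disposable = income - taxes
print(f"investable savings: ${0.024 * disposable:.2f} out of every $100 earned")
```

Under these assumptions, roughly $1.60 of every $100 earned is available to chase r(k). That is the whole 'wealth accumulation' channel for the 90%.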

Meanwhile, regulated auto loans debt is now at $4,340 per capita, regulated credit card debt is at $2,930 per capita, and regulated student loans debt is now at $4,920 per capita. Federal regulations on credit card debt are known to behaviourally create barriers to consumers paying this debt down and/or using credit cards prudently. Federal regulations make student loans debt exempt from bankruptcy protection, effectively forcing borrowers who run into financial trouble into a perpetual vicious cycle of debt spiralling out of control. Auto loans regulations effectively create and encourage sub-prime markets for lending. So who is responsible for the debt-driven part of wealth inequality? Why, the same Government we are begging to solve the problem it helps create.

Now, add a new dimension, ignored by many followers of Mr. Piketty: today's social & sustainability narratives risk delivering more of the same outcome by empowering Governments to create more superficial scarcity. This does not mean that all regulations and all restrictions are intrinsically bad, as noted before. Nor does it mean that social and environmental risks are not important concepts. Quite the opposite: it means that we need to pay more attention to regulation-induced transfers of wealth and income from the lower 90% to the upper 10%, and to companies and non-profits across the entire chain of such transfers. If we want to do something about our social and environmental problems (and, yes, we do want to), we need to minimise the costs of other regulations. We need to increase r(hh) and, even more so, r(lh). And we need to increase the g too. What we do not need to do is increase the r(k) without raising the other returns. We also need to recognise that on the road paved with good (environmental) intentions, we are transferring vast amounts of income (and wealth) from ordinary Joe and Mary to Elon Musk and his lenders and investors, as well as to a litany of other rent-seeking enterprises and entrepreneurs. The subsidies fuel returns to physical and fixed capital, intellectual property (technological capital), financial capital and, to a lesser extent, higher-quality human capital. All at the expense of general human capital.

Another aspect of the over-simplified r vs g narrative is that, by ignoring the existing tax codes, we are magnifying the difference between the various forms of r and the g. Take the differences in tax treatment between physical, financial and human capital. Set aside the issue of tax evasion, but do include the issue of tax avoidance (legal, and practiced with greater intensity the higher your wealth reaches). I can invest in fixed capital via a corporate structure that allows depreciation tax claw-backs and interest deductions. I can even position my investment in a tax (non-)haven jurisdiction, like, say, Michigan or Wisconsin, where - if I am rich and I do invest a lot - I can get local tax breaks. I can even get a citizenship to go along with my investment, as a sweetener. Now, suppose I invest the same amount in technological capital (or, put more cogently, in Intellectual Property). Here, the world is my oyster: I can go to tax-advantaged nations or stay in the U.S. So my tax on these gains will be even lower than for fixed capital. Investing in financial capital is similar, with tax rates ranging somewhere between the two other forms of capital. Now, if I decide to invest in my human capital, my investments are not fully tax-deductible (I might be able to deduct some tuition, but not living expenses or, in terms of corporate finance, operating expenses and working capital). Nor is there a depreciation claw-back. There is no tax incentive for me to do this. And my returns from this investment will be hit with all income taxes possible - state and federal. It is almost sure as hell that my tax rate will be higher than for any form of non-human capital investment. Worse: if I borrow to invest in any form of capital other than human capital and I run into a hard spot, I can clear the slate by declaring bankruptcy. If I did the same to invest in human capital, no luck: student loans are not subject to bankruptcy protection.
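
For illustration only, run an identical pre-tax return through the differing treatments sketched above. The tax rates are placeholder assumptions, not actual tax-code figures:

```python
# Hypothetical effective tax rates, chosen only to illustrate the ordering
# argued above - not actual tax-code figures.
pre_tax_return = 0.08

effective_tax = {
    "fixed capital (depreciation, interest deductions)": 0.15,
    "intellectual property (tax-advantaged domicile)": 0.10,
    "financial capital": 0.20,
    "human capital (full state + federal income tax)": 0.40,
}

for form, rate in effective_tax.items():
    print(f"{form:52s} -> {pre_tax_return * (1 - rate):.2%} after tax")
# The ordering, not the specific numbers, is the point: the same pre-tax
# return is taxed most heavily when it accrues to human capital.
```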

Not to make a long argument any longer, but to acknowledge the depth of the tax policy problems, take another scenario. I join a start-up as a partner and get shares in the company. Until I sell these shares as a co-founder, I face no tax liability. Alternatively, I join the same start-up as a key employee, with the human capital-related skills the start-up really, really needs to succeed. I get the same shares in the company. Under some jurisdictions' rules, I face an immediate tax liability, even if I can never sell these shares in the end. Why? Ah, no reason, other than the pure stupidity of those writing the tax codes.

The net effect is the same across all of the above points: risk-adjusted after tax returns on investment in human capital are depressed - superficially - by policies. Policies, therefore, are driving wealth inequality. After-tax risk-adjusted returns to human capital are lower than after-tax risk-adjusted returns on physical, financial and technological capital.

Once again, we need to increase returns to human capital without raising returns to other forms of capital. And we need to increase real rates of economic growth (what that means in the real world - as opposed to what it means in the world of Piketty-following academia - is a different subject altogether). And we need to get Government and regulators out of the business of transferring our income and potential wealth from us to the 1%-ers and the 10%-ers.

How do we achieve this? A big question that I do not have a perfect answer to, and as far as I am aware, no one does. 

One thing we must consider is systemically reducing rents obtained through inheritance, rent seeking and other unproductive forms of capital acquisition. 

Another thing we must have is a more broadly-spread allocation of financial assets linked to the productive economy (equity). In a way, we need to dramatically broaden shareholding in real companies' assets among the 90%. Incidentally, this will go some way in addressing the threat to the social fabric posed by automation and robotisation: making people the owners of companies puts robots to work for people.

The third thing is what we do not need: a penal system of taxation that reduces r(hh) and r(lh). Progressive income taxation delivers exactly that outcome.

Fourth thing: we need to recognise that some assets derive their productivity from externalities. The best example is land, which derives most of its value from socio-economic investments made by others around the site. These externalities-related returns must be taxed as a form of unearned income/wealth. A land value tax or a site value tax can do the job.

As I noted above, I do not claim to hold a solution to the problem. I do claim to hold a blueprint for a systemic approach to devising such a solution. Here it is: we need sceptical, independent & continuous impact analysis of every piece of regulation, of every restriction, of every socially and environmentally impactful (positive or negative) measure. But above all, we need to be sceptical about the role of the Government, just as we have become sceptical about the capacity of the markets. Scepticism is healthy. Cheerleading is cancerous. Stop cheering, start thinking deeper about the key issues around inequality. And stop begging for Government action. Government is not quite the panacea we imagine it to be. Often enough, it is a problem we beg it to solve.



Wednesday, January 3, 2018

2/1/18: Limits to Knowledge or Infinity of Complexity?


Occasionally, mass media produces journalism worth reading not to extract a momentary piece of information (the news) of relevance to our world, but to remind ourselves of the questions, quests, phenomena and thoughts worth carrying with us through our conscious lives (assuming we still have these lives left). 

With that intro, a link to just such a piece of journalism: https://www.theatlantic.com/science/archive/2017/12/limits-of-science/547649/. This piece, published in The Atlantic, is worth reading. For at least two reasons:

Reason 1: it posits the key question of the finiteness of the human capacity to know; and
Reason 2: it posits a perfect explanation as to why truly complex, non-finite (or non-discrete) phenomena are ultimately not knowable in the perfect sense.

Non-discrete/non-finite phenomena belong to the human and social fields of inquiry (art, mathematics, philosophy, and, yes, economics, psychology, sociology etc). They are defined by the absence of an end-of-the-game rule. Chess, Go, any and all games invented by us, humans, have a logical conclusion - a rule that defines the end of the game. They are discrete (in terms of the ability to identify steps that sequentially lead to the realisation of the end-rule) and they are finite (because they always, by definition of each game, result in either a draw or a win/loss - they are bounded by the end-of-game rule).

Knowledge is, well, we do not know what it is. And hence, we do not know if the end-of-game rule even exists, let alone what it might be. 


Worth a read, folks.

Sunday, December 10, 2017

10/12/17: Rationally-Irrational AI, yet?..


In a recent post (http://trueeconomics.blogspot.com/2017/10/221017-robot-builders-future-its-all.html) I mused about the far-reaching implications of the capability of Google's AlphaZero (AlphaGo, in its earliest incarnation) to develop systems of logic independent of humans. And now we have another breakthrough in the Google AI saga.

According to the report in the Guardian (https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours):

"AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules to chess before beating the world champion chess program, Stockfish 8, in a 100-game match up."

Another quote worth considering:
"After winning 25 games of chess versus Stockfish 8 starting as white, with first-mover advantage, a further three starting with black and drawing a further 72 games, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup. AlphaZero won 90 games, lost eight and drew 2."

Technically, this is impressive. But the real question worth asking at this stage is whether the AI logic is capable of intuitive sensing, as opposed to relying on self-generated libraries of move permutations. The latter is a form of linear thinking, as opposed to the highly non-linear 'intuitive' logic that would be consistent with discrete 'jumping' from one logical move tree to another, based not on the history of past moves, but on the strategy those moves reveal to the opponent. I don't think we have an answer to that, yet.

In my view, that is important because, as I argued some years ago in a research paper, such 'leaps of faith' in logical systems are indicative of the basic traits of humanity, as being distinct from other forms of conscious life. In other words, can machines be rationally irrational, like humans?..


Tuesday, November 21, 2017

20/11/17: Your Family Doc, Called AI...


In a recent post, I wrote about the AI breaching the key dimension of 'intelligence' - the ability to self-acquire information and self-replicate knowledge (see http://trueeconomics.blogspot.com/2017/10/221017-robot-builders-future-its-all.html). And now, Chinese AI developers have created a robot that is capable of excelling at (not just passing) a medical certification exam: https://futurism.com/first-time-robot-passed-medical-licensing-exam/.

Years ago, working for IBM's think tank, IBV, I recall discussions about the future potential applications for Watson. Aside from the obvious analytics involved in finance (my area), we considered the most feasible application for AI and language-based software in... err... that's right: medicine. More precisely, as family doctors replacement. For now, Watson is toiling primarily in the family doctors' support function, but truth is, there is absolutely no reason why AI cannot currently replace 90% of the family doctors' practices.

And, while we are on the subject of AI, here is an interesting article on how China is beating the U.S. (and by extension the rest of the world) in the AI R&D game: https://futurism.com/china-could-soon-overtake-the-us-in-ai-development-former-google-ceo-says/ and https://futurism.com/china-has-overtaken-the-u-s-in-ai-research/.

Still scratching your heads, Stanford folks?.. 

Monday, October 23, 2017

22/10/17: Robot builders future: It's all a game of Go...


This, perhaps, is the most important development in AI (Artificial Intelligence) to-date: "DeepMind’s new self-taught Go-playing program is making moves that other players describe as “alien” and “from an alternate dimension””, as described in The Atlantic article published this week (The AI That Has Nothing to Learn From Humans: https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/?utm_source=atltw).

The importance of Google DeepMind's AlphaGo Zero AI program is not that it plays Go with a frightening level of sophistication. Its true importance is in the self-sustaining nature of the program, which can learn independently of external information inputs, by simply playing against itself. In other words, Google has finally cracked the self-replicating algorithm.

Yes, there is a 'new thinking' dimension to this as well. Again, quoting from The Atlantic: "A Go enthusiast named Jonathan Hop ...calls the AlphaGo-versus-AlphaGo face-offs “Go from an alternate dimension.” From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that’s brilliant—or at least, the parts of it we can understand."

But the real power of AlphaGo Zero version is its autonomous nature.

From the socio-economic perspective, this implies machines that can directly learn complex (extremely complex), non-linear and creative (with shifts of nodes) tasks. This, in turn, opens the AI to the prospect of writing its own code, as well as executing tasks that, to-date, have been thought of as impossible for machines (e.g. combining referential thinking with creative thinking). The idea that the coding skills of humans can ever keep up with this progression has now been debunked. Your coding and software engineering degree is not yet obsolete, but your kid's one will be, and very soon.

Welcome to the AlphaHuman Zero, folks. See yourself here?..


Wednesday, October 18, 2017

17/10/17: Intel Opens the Era of Unemployed Insurance Brokers...


If you have a job structuring, selling, marketing and monitoring/managing car insurance contracts, you should stop reading this now... because Intel has developed the first set of algorithmic standards for self-driving vehicles that aim to ensure that any accident involving a self-driving vehicle cannot be blamed on the software that operates that vehicle.

How? Read some scant details here: https://www.bloomberg.com/news/articles/2017-10-18/intel-proposes-system-to-make-self-driving-cars-blameless.

What does this mean? If successful, regulating algorithmic standards, most likely more advanced than the one developed by Intel, will mean that self-driving vehicle collisions will, by system definition, be blamed only on human drivers, bicyclists and pedestrians. This will, de facto, perfectly standardise all insurance contracts covering vehicles other than those operated by people. The result will be a rapid collapse in demand for car insurance as we know it.

Instead of writing singular (albeit standardised) contracts to cover individual drivers (or the vehicles driven by them), using actuarial risk models that attempt to identify the risk profiles of these drivers, the insurance industry will simply write a single contract to cover software running millions of vehicles, plus a standard contract to cover the vehicle (hardware). There will be no room left for profit margins, or for service / contract differentiation, or for pricing variation, or for bundling of offers. In other words, there will be no need for the numerous marketing, sales, investigative, enforcement, actuarial etc. jobs currently populating the insurance industry. The car insurance sector will simply shrink to a duopoly (or something proximate) providing a cash management service to autonomous vehicle owners.

There will be lots of armchair-surfing for currently employable insurance industry specialists in the near future...


Friday, December 25, 2015

25/12/15: WLASZE: Weekend Links on Arts, Sciences and Zero Economics


Merry Christmas to all! And in spirit of the holiday, time to revive my WLASZE: Weekend Links on Arts, Sciences and Zero Economics postings that wilted away under the snowstorm of work and minutiae, but deserve to be reinstated in 2016.

[Fortunately for WLASZE and unfortunately for die harder economics readers of the blog, I suspect my work commitments in 2016 will be a little more balanced to allow for this...]


Let's start with Artificial Intelligence - the folks at Ars Technica are running an excellent essay debunking some of the AI myths. Read it here. The list is pretty much on the money:

  • Is AI about machines that can think (in human intelligence sense)? Answer: predictably No.  
  • Is AI capable of outstripping human ethics? Answer: not necessarily.
  • Will AI be a threat to humanity? Answer: not any time soon.
  • Can the AI system acquire sudden singularity? Answer: sort of too far away and doubtful even then.
The topic is hugely important, extremely exciting and virtually open-ended. Perhaps of interest, I wrote back in 2005 about the non-linearity and discontinuity of our intelligence as a 'unique' identifier of humanity. The working paper on this (I have not revisited it since 2005) is still available here.

And to top the topic up, here is a link on advances in robotics over the grand year of 2015: http://qz.com/569285/2015-was-a-year-of-dumb-robots/. The title says it all... "dumb robots"... or does it?..

Update: another thought-provoking essay - via QZ - on the topic of AI and its perceived dangers. A quote summarising the story:
"Elon Musk and Stephen Hawking are right: AI is dangerous. But they are dangerously wrong about why. I see two fairly likely futures:

  • Future one: AI destroys itself, humanity, and most or all life on earth, probably a lot sooner than within 1000 years.
  • Future two: Humanity radically restructures its institutions to empower individuals, probably via trans-humanist modification that effectively merges us with AI. We go to the stars."
Personally, I am not sure which future will emerge, but I am sure that there is only one future in which we - humans - can have a stable, liberty-based society. And it is the second one. Hence my concerns - expressed in public speeches and blog posts - with the effects of technological innovation and the emergence of the Gig-Economy on the fabric of our socio-economic interactions.

At any rate... that is a cool dystopian pic from QZ


Dangers of AI or not, I do hope we sort out architecture before robots either consume or empower us...

On the lighter side, or maybe on a brighter side - for art cannot really be considered a lighter side - Saatchi Art are running their Best of 2015 online show here: http://www.saatchiart.com/shows/best-of-2015, and it is worth running through. It is loaded with some younger and excitingly fresher works than those that make traditional art shows.

Like Jonas Fisch's vibrantly rough Gears of Power


All the way to the hyper-expressionist realism of Tom Pazderka, here is an example of his Elegies to Failed Revolutions, Right Wing Rock'n'Roll 



And for that Christmas spirit in us, by Joseph Brodsky, translated by Derek Walcott (for a double-Nobel take):


The air—fierce frost and pine-boughs.
We’ll cram ourselves in thick clothes,
stumbling in drifts till we’re weary—
better a reindeer than a dromedary.

In the North if faith does not fail
God appears as the warden of a jail
where the kicks in our ribs were rough
but what you hear is “They didn’t get enough.”

In the South the white stuff’s a rare sight,
they love Christ who was also in flight,
desert-born, sand and straw his welcome,
he died, so they say, far from home.

So today, commemorate with wine and bread,
a life with just the sky’s roof overhead
because up there a man escapes
the arresting earth—plus there’s more space.


Merry Christmas to all!

Saturday, June 20, 2015

20/6/15: WLASze: Weekend Links of Arts, Sciences & zero economics


A couple of non-economics-related, but hugely important, links worth looking into... or an infrequent entry in my old series of WLASze: Weekend Links of Arts, Sciences and zero economics...

Firstly, via Stanford, we have a warning about the dire state of nature: http://news.stanford.edu/news/2015/june/mass-extinction-ehrlich-061915.html. A quote: "There is no longer any doubt: We are entering a mass extinction that threatens humanity's existence." If we think we can't even handle a man-made crisis of debt overhang in the likes of Greece, what hope do we have of handling an existential threat?

Am I overhyping things? Maybe. Or maybe not. As the population ages, our ability to sustain ourselves is increasingly dependent on better food, nutrition, quality of environment etc. Not solely because we want to eat/breathe/live better, but also because of brutal arithmetic: the economic activity that sustains our lives depends on productivity. And productivity declines precipitously with an ageing population.

So even if you think the extinction event is a rhetorical exaggeration by a bunch of scientists, the brutal (and even linear - forget complex) systems of our socio-economic models imply a serious and growing inter-connection between our man-made shocks and natural systems' capacity to withstand them.


Secondly, via Slate, we have a nagging suspicion that not everything technologically smart is... err... smart: "Meet the Bots: Artificial stupidity can be just as dangerous as artificial intelligence": http://www.slate.com/articles/technology/future_tense/2015/04/artificial_stupidity_can_be_just_as_dangerous_as_artificial_intelligence.html.

"Bots, like rats, have colonized an astounding range of environments. …perhaps the most fascinating element here is that [AI sceptics] warnings focus on hypothetical malicious automatons while ignoring real ones."

The article goes on to list examples of harmful bots currently populating the web. But it evades the key question asked in the heading: what if AI is not intelligent at all, but is superficially capable of faking intelligence to a degree? Imagine a world where we co-share space with bots that can replicate emotional, social, behavioural and mental intelligence up to a high degree, but fail beyond a certain bound. What then? Will the average / median denominator of human interactions converge to that bound as well? Will we gradually witness the disappearance of the human capacity to by-pass complex, but measurable or mappable, systems of logic, thus reducing the richness and complexity of our own world? If so, how soon will humanity become a slightly improved model of today's Twitter?


Thirdly, "What happens when we can’t test scientific theories?" via the Prospect Mag: http://www.prospectmagazine.co.uk/features/what-happens-when-we-cant-test-scientific-theories
"Scientific knowledge is supposed to be empirical: to be accepted as scientific, a theory must be falsifiable… This argument …is generally accepted by most scientists today as determining what is and is not a scientific theory. In recent years, however, many physicists have developed theories of great mathematical elegance, but which are beyond the reach of empirical falsification, even in principle. The uncomfortable question that arises is whether they can still be regarded as science."

The reason why this is important to us is that the question of the falsifiability of modern theories is non-trivial to the way we structure our inquiry into reality: the distinction between art, science and philosophy becomes blurred when one set of knowledge relies exclusively on the tools used in the other. So much so, that even the notion of knowledge, popularly associated with inquiry delivered via science, is usually not extendable to art and philosophy. Example in a quote: “Mathematical tools enable us to investigate reality, but the mathematical concepts themselves do not necessarily imply physical reality”.

Now, personally, I don't give a damn if something implies physical reality or not, as long as that something is not designed to support such an implication. Mathematics, therefore, is a form of knowledge and we don't care if there are physical reality implications of it or not. But physical sciences purport to hold a specific, more qualitatively important corner of knowledge: that of being physically grounded in 'reality'. In other words, the very alleged supremacy of physical sciences arises not from their superiority as fields of inquiry (quality of insight is much higher in art, mathematics and philosophy than in, say, biosciences and experimental physics), but in their superiority in application (gravity has more tangible applications to our physical world than, say, topology).

So we have a crisis of sorts for the physical sciences: their superiority has now run out of road and has to yield to the superiority of the abstract fields of knowledge. Bad news for humanity: the deterministic nature of experimental knowledge is getting exhausted. With it, the determinism surrounding our concept of knowledge diminishes too. Good news for humanity: this does not change much. Whether or not string theory is provable is irrelevant to us. As soon as it becomes relevant, it will be, by Popperian definition, falsifiable. Until then, marvel at the infinite world of the abstract.