The Audience is Dumb

Many years ago, I worked for a professional services firm. The firm had bought the right to write a series of articles in a noted industry journal. When the partner responsible for these articles was injured, I was brought in to work on them. I worked with a former News Ltd journalist to try to turn the words of various consulting, audit and tax professionals into usable articles.

However, there was a problem. The experts wanted to write only for other experts, while the audience for this publication was generalists. So the ex-journo & I would sit down with several hundred words of expert stuff and ask “why would anyone want to read this?”
We would then try to massage the words into something that someone other than the author would want to read. This would enrage the authors, who were very clever, and being very clever, viewed our cosmetic surgery of their vowels and consonants as an assault on their cleverness. We would have to ask them questions to try to draw out the useful stuff buried in their layers of expertise. This could be a painful process.
Eventually the article would arrive at a readable state and it would be published. The names of the experts would appear but not ours. Which was fine, because that’s the nature of being an editor.
Editors are midwives (with all the yelling & screaming that entails), easing ideas into the light.
And editors are also stand-ins. Representatives of the potential reader, there to keep the authors focused on what was important – and ensuring that their writing leaves a mark on the world.


Cyber Realism

An NYT interview with Ev Williams of Twitter and Medium has prompted some internet soul searching.

“I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,” Mr. Williams says. “I was wrong about that.”

Paul Wallbank wrote a response to it. I got to thinking about Cyber-Utopianism and the political thoughts roiling around my mind and I started to wonder what a Cyber-Realism would look like. I left some comments on Paul’s blog. I have highlighted four to write about further. What follows is not a coherent argument, rather it is a stream of sense data (or “rant”). All of the feels. There will be coherent arguments next – or so I am telling myself.

Was I a Cyber-Utopian? Well, I am too bitter & twisted to be an anything Utopian but I was captivated by the world-wide web when a friend showed it to me in a university college attic in 1993. I gravitated towards the web-based chat boards of the NME and Barbelith in the late-90s. I started blogging in 2001 as I left the UK and have done so fitfully ever since. For me it was less about the technology than the people it connected me to. I am… strange. Far less strange than I used to be, and far more comfortable with the strangeness that remains, but strange none-the-less. And the people I found, and continue to find, online were not the people I grew up with or met at college or work. One reason people move to a big city is to meet new people. The internet cosmopolitanised my mind.

In the late 2000s, the world woke up to this stuff and I reckoned there might be a dollar to be had here. I was less a Cyber-Utopian and more a Cyber-Opportunist. I earned a bit here and there but I was neither smart enough nor focused enough to gain riches. But it was still fun.

Something changed around 2012. Probably around the time of the Facebook IPO. It felt as though the internet was finally mainstream and properly corporate. It was no longer this cool, counter-cultural place full of interesting people. They were still there but harder to find – what with everyone else around. If this makes me sound like a burnt-out hippy crying into his beer at 11pm, 4 November 1980 – then yeah, that’s about right.

  1. Human beings are apes. We are collaborative, competitive, and tribal.

If we are talking about the internet, then we must talk about people. People are animals. And I don’t mean that pejoratively. We are primates who form social groups. Within those social groups we collaborate and compete, we gossip and we groom, we fight and we f**k. We define ourselves by the group we are in, swear allegiance to our kin within, and death to the enemies outside. Evolution means that we are hardwired to survive in a hostile, resource-constrained environment. And to our ape eyes, all environments are hostile and resource-constrained. If we found paradise by mistake, we would probably bare our teeth at it and mark it as territory with piss or blood.

We might have wi-fi and routers and smartphones and apps and yearn to leave our hairy, sinewy ape bodies but we cannot. We take the savanna and the tundra into cyberspace. We do what comes naturally. Ape is what we are. We can be no other.

  2. Human beings are broken. We cannot be fixed by either technology or ideology.

We remain apes with aspirations. Technology cannot change this unless we so drastically change our bodies that we are no longer human. We might tame, train, and constrain ourselves with technology – wearables and apps to augment our willpower and limit our harmful behaviours. The Quantified Self as Personal Panopticon. We might hope that allowing groups to communicate would reduce hatred – although miscommunication was not the reason that our ancestors slaughtered each other. Technology can only do so much.

We had hoped that some form of ideology or social engineering would fix everything. Efficient Markets would enable trade between individuals and groups, allow price discovery, unleash innovation, allocate risk and reward with a near-divine fairness. It didn’t turn out like that. A Dictatorship of the Proletariat would eliminate oppression, famine, and war. It didn’t turn out like that. If we liquidate the treacherous others of different colours, creeds, sexualities, politics then our purified Nation will become Strong. It didn’t turn out like that. Ideology is a good question on which to start and a terrible answer on which to end.

Broken is what we are. For now, we can be no other.

  3. Human beings are creative. We will use technologies in ways that we cannot predict.

It’s not all bad news being an ape. You can have ideas in your brain. You can make words with your teeth and lips and breath. You can paint those words on leaves or hack them into rock. You can make tools. Out of flint or obsidian. Or finely-machined silicon. With these tools, we can transform ourselves and our environment. We can change the ecologies of an entire planet. We can drive other species to extinction without noticing. We can also make skyscrapers and action men and dams and cars and malaria nets and planes and tanks and missiles and my little ponies and stuff. We can tweet amusing pictures of startled cats that make a stranger on another continent smile (or we can threaten to rape them because of a word they said). We can take military equipment invented to fight wars and use it to play games. Or we can do the reverse.

We are a creative species. We do not fully understand that creativity – its drives and its operations, its logics and its deliriums. We pull back bits at the edges to package in over-priced workshops or books with one-word titles, but we still operate in the dark.

Creative is what we are. We can be no other.

  4. We hope not because we believe we can fix human beings, but because human beings excel at making new mistakes – or what some call “change”

I am contractually obliged to find some cause for optimism. I have a son. I wanted a child for purely selfish purposes – because it’s part of being human (not a necessary part but a part none-the-less). However, now that I have him, I can neither lie to him and say that everything will be fine, nor simply pray for the end of the world and cast him into despair. Not when there are seagulls to be chased and puddles to be jumped in and sonic screwdrivers to be waved all over the place.

I do not know if we will make it. Or if the world will be fit for him. But I do know that human beings excel at making things and doing things and especially at f**king things up. And while most of those mistakes will be old mistakes, a few of them will be new ones. Some of them will turn into something new. CERN employees trying to fix documentation issues for nuclear physicists might create something that 25 years later is used for cat pictures, porn, and Russian propaganda.

If we can find a way to keep on making new mistakes, we might be able to f**k up our way out of this f**k up that we are in right now. Get to it people. I know that you’ve got it in you.

This is part of the Into The Maelstrom series.


Into The Maelstrom

“I no longer hesitated what to do. I resolved to lash myself securely to the water cask upon which I now held, to cut it loose from the counter, and to throw myself with it into the water.”

As with Poe’s protagonist, I have put this off for as long as possible and can put it off no longer. There will be a series of blog posts about politics and technology. As they are written, links will be added to this page. As I work out what the hell this is all about, I will rewrite this page. But each post will not be edited further.

  1. Cyber Realism
  2. Entropy
  3. The Only Game In Town
  4. Broken Windows Theory
  5. Crossing the Catastrophic Fold
  6. Consent

What if Artificial Intelligence doesn’t take over the world?

There’s a lot of concern among Silicon Valley types about Artificial Intelligence (AI) taking over the world. In fact, for some of these people, the threat of imminent AI dominance makes global poverty look like a rounding error. It’ll be like The Terminator but if Apple are involved, the robots will be more elegantly designed (more Cate Blanchett than Arnold Schwarzenegger).

However most of these technologists also want AI to take over the world. Not in the sense of annihilating all humans – just in the sense of solving all our problems. At the moment, AI and Machine Learning (ML) are mostly helping Netflix to serve slightly more entertaining films to us or permitting us to ask Alexa to order a fidget spinner*. However in the future, AI will be driving our cars, doing our jobs, and probably drinking our tea when our backs are turned. AI will do everything. AI will solve everything. Conflict? There’s a bot for that. Death? Sure, we’ll solve for that…

Now this is all awesome, yeah? Well, not always. ML is dependent on this stuff called “training data” – basically data that allows the ML to identify patterns on which it then models its decision-making. A lot of that data comes from people. And people can be a**holes. Which means that algorithms can be a**holes too. Racist a**holes in fact. So we may be putting our fates in the hands of entities modelled on ourselves. Does that fill you with confidence?
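The training-data problem can be sketched in a few lines. This is a deliberately crude, hypothetical illustration – the `train` function, the loan-screening scenario, and the group labels are all invented for this example – of how a model that simply learns from past decisions reproduces whatever bias those decisions contained:

```python
# A toy "model" that learns by majority vote from labelled examples.
# If the training data is skewed, the learned rule is skewed too:
# bias in, bias out.
from collections import Counter, defaultdict

def train(examples):
    """Learn the most common label seen for each feature value."""
    votes = defaultdict(Counter)
    for feature, label in examples:
        votes[feature][label] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

# Hypothetical loan-screening history reflecting past human decisions.
biased_history = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "approve"),
]

model = train(biased_history)
print(model)  # {'group_a': 'approve', 'group_b': 'reject'}
```

No malice anywhere in the code – the “a**hole” behaviour arrives entirely through the data.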

On the other hand, we’ll get over that, right? There’s an app that fixes racism, isn’t there?

There is a default assumption among technologists that all problems can be solved and that this solution is generally a cool piece of technology. This is not necessarily true. Our challenge is that despite all the neat gadgets, we are basically apes. We have ape-ish desires (for food, sex, love, power, a tribe to be a part of) that we are able to channel into complex, baroque civilisational structures. But we are nevertheless still apes. We solve one problem and create another (we die of hunger much less than we used to but now many of us are obese). Now that we have wonderful communication technologies, we use them to argue with each other in ever more vociferous ways**.

So what if AI doesn’t take over the world? What if it only partially solves our problems but leaves us mired in the challenges of politics, economic inequality, and conflict that we apes have always had? What if it only makes us marginally more efficient a**holes? We will have to do what we’ve always done and sort this mess out ourselves…

*That reference is going to date so fast.

**If you want to see a compelling argument in favour of human extinction then simply visit Twitter.


ISKO Singapore: Governance

This Friday. In Singapore. We are talking… Governance. Yeah. I know. Delegation. Responsibilities. Policies. Measurement. Risk Mitigation. The whole ball o’ wax.

Details of the event – with more video and content – available here.


Review: Beyond Belief by Hugh Mackay

Beyond Belief by Hugh Mackay is about religion in Australia – or perhaps “the artist formerly known as religion”. Formal religious observance in Australia has been dropping since the 1950s; a significant proportion of the population answer “no religion” to census questions, and many of those who call themselves Christian attend church only at Christmas and New Year.

To the devout, this is a problem to be remedied. However Mackay is both a self-described “Christian Agnostic” and social researcher / cultural commentator who has plied his trade for decades. He has interviewed many Australians over the years and takes their beliefs (or lack of them) seriously. The book is peppered with quotes from the interviews and these portray a complex, paradoxical state of affairs – with people sating their need for believing, behaving, belonging, meaning, identity, ecstatic experience, and comfort in a myriad of ways.

The two standout chapters are on the relationship that Australians have with organised Christianity – tl;dr version: it’s complicated – (“Anyone for Church?”) and what the heck Spiritual But Not Religious means (“SBNR”). Richly described and empathetic, they reach into underexplored facets of Australian life.

The later chapters of the book resonate less with me. In them, Mackay discusses the nature of god and the evidence for the existence of a deity, and ends by laying out his moral and ethical vision for Australia (the spirit of loving kindness). The agnostic takes to the pulpit. It’s not that I disagree with his point of view. Rather, as he moves away from his field work, his observations become less distinctive and grounded and more Christmas Card. It seems that writers on religion and society are not tempted with the Fruit of the Tree of Knowledge of Good and Evil. Instead the serpent offers the Indigestible Chicken Nugget of Theology.

Nevertheless, a book well worth reading.


Enterprise 2.0 – 2006 rewind

Last week, Jive – the enterprise collaboration software company – was finally acquired. This was announced at the annual JiveWorld event – which is a little bit like announcing to the family and friends who’ve come to your 40th birthday party that you’ve just got married to a total stranger in a Las Vegas chapel and they should totally be happy for the both of you.

I do not know much about ESW Capital or the portfolio of software companies that they own but I do know that Jive’s plans for the future pretty much consisted of being bought by someone. Time will tell if the match is a good one.

But that’s not why we’re here. In the wake of this news, James Dellow posed the following questions: “A few people have also suggested that it also flags the end of the Enterprise 2.0 era… If we accept that as true and we held a retrospective on E2.0, what would you say about it? … Or do you disagree – is the dream of E2.0 still alive and well?”

Always being up for a challenge, I went back to the original Andrew McAfee article on Enterprise 2.0. Here are the opening sentences:

Do we finally have the right technologies for knowledge work? Wikis, blogs, group-messaging software and the like can make a corporate intranet into a constantly changing structure built by distributed, autonomous peers — a collaborative platform that reflects the way work really gets done.

Well the answer to that first question is pretty simple: No, we don’t. Or rather, the technologies that we had in 2006 did not satisfy end users and their needs for collaborative knowledge work. Nor do the technologies we have now in 2017 for that matter. This is partly about the technologies and partly about the users and their expectations but more of that later.

Let’s begin by looking at the tech and going through the 6 technical capabilities that made up McAfee’s Enterprise 2.0:

  1. Search. Search on the web is awesome. Search inside organisations sucks. This is still the case. The thing here is that this is not really a technical problem. We have had the technology to dramatically improve the findability of information within organisations for years and the knowledge of how to do so – but we just haven’t bothered to actually do it. This is a problem of ineptitude rather than ignorance. Search continues to be underloved in corporate environments.
  2. Links. Google came to fame with the PageRank algorithm – which dealt with the problems of search by looking at the links between web pages rather than simply the words on them. McAfee’s point is that in an environment where lots of people author content that links to other content, links become better therefore search becomes better. Enterprise information environments are becoming more highly linked – but not through the methods that McAfee proposed. People don’t create documents with lots of links. However data that sits in multiple systems is being linked together – enabling not only better retrieval but deployment of that data in new forms and contexts.
  3. Authoring. If we give people in organisations blogs and wikis, verily they will write lots of stuff in these blogs and wikis. Nope. The vast majority of them won’t. Unless we pay them to.
  4. Tags. And then people will tag their stuff and other people’s stuff with lots of handy keywords. Nope. Most of them won’t do that either. Folksonomies get a reference. Ah the 2000s – such innocent days*!
  5. Extensions. Not hair extensions. Not exam extensions. Not exam extensions caused by hair extensions gone wrong. Instead – “automating some of the work of categorization and pattern matching” through recommendation engines and the like. A decade on, we are just starting to see this become a reality with search-based applications and AI.
  6. Signals. Letting people know what’s going on. With a funky, new technology like RSS. We do that but in different ways to the ones imagined in this article. Email is still… everywhere. Activity feeds became popular in the wake of Twitter and chat has come back in a big way in the last few years (hello Slack). I don’t think we have figured out how to make this work yet.
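The link-based idea in point 2 can be shown with a toy sketch. This is a hypothetical, heavily simplified version of the PageRank approach (the intranet graph and page names are invented for this example): rank flows along links, so heavily linked-to pages rise regardless of the words they contain.

```python
# Minimal PageRank by power iteration over a tiny hypothetical
# intranet link graph. Pages earn rank from incoming links,
# not from their content.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Everyone starts each round with the "random teleport" share...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...then each page passes its rank along its outgoing links.
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# Hypothetical intranet: everything links to the wiki home page.
links = {
    "wiki_home": ["policy_doc"],
    "policy_doc": ["wiki_home"],
    "team_blog": ["wiki_home"],
}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # the most-linked-to page wins
```

The catch, as noted above, is that this only works when people actually create links – which, inside most organisations, they don’t.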

The article is noteworthy for the lack of any reference to Facebook (founded 2004), Twitter (yet to be launched in April 2006), or even MySpace (founded 2003). It’s dealing with an earlier set of technologies that emerged in the consumer space and then were used by some inside corporations. A harsh reading (overly harsh) is that it’s already out of date on publication. A more generous reading is that it is an honest attempt to tackle a messy, emergent environment at a point in time. And that’s an inherently risky activity.

McAfee also identified 2 “threats” to Enterprise 2.0.

The first is that busy knowledge workers won’t use the new technologies, despite training and prodding. Most people who use the Internet today aren’t bloggers, wikipedians or taggers. They don’t help produce the platform — they just use it. Will the situation be any different on company intranets? It’s simply too soon to tell. The second threat is that knowledge workers might use Enterprise 2.0 technologies exactly as intended, but this may lead to unintended outcomes. Intranets today reflect one viewpoint — that of management — and are not platforms for dissent or debate. After blogs, wikis and other voice-giving technologies appear, this will change. However, the question remains: Will the change be welcomed?

The first threat proved to be absolutely correct. Most employees just want to do their jobs and these platforms were often considered “extra work”. Where they were integrated into the flow of work, they were successful. The second threat played out slightly differently to how it was phrased here. There wasn’t that much change to be welcomed or rejected.

Some of the more florid rhetoric of Enterprise 2.0 (which in this article is hinted at but largely absent) claimed that these new technologies would tear down corporate hierarchies and reconfigure organisations as kinder places. I don’t think they did. Timing may have been a factor – a couple of years after this article, the US economy nearly collapsed, millions of people lost their jobs, and tangling with your boss wasn’t top of the priority list. But, in general, I think the forces that create and maintain hierarchy in organisations are too strong to be shifted with blogs. On this point, I largely agree with Jeffrey Pfeffer.

Enterprise collaboration has still not been “solved” (see the interest in Slack, Facebook Workplace, the million different ways of collaborating in Office365) and that story feels like it has a way to go. However the cluster of user-generated content technologies that came to the fore in Web 2.0 are no longer where it’s hot.

The wheel turns – like Gartner’s Hype Cycle, like time eating away at your balance sheet and your technical debt, like the “Settings” cog icon on every bit of software today.

*Apart from the mass terrorism. And the wars. And the global financial collapse.