Category Archives: Internet

Computers can be designed to augment human ability

Computers have found their way into nearly every corner of our lives, but in the last few years we have begun to realise they can do much more than just process information. They are beginning to help us, directly and indirectly, to understand how other systems operate: how the brain learns, remembers and recalls information; how DNA is the blueprint for human beings. The work being done in this field was outlined in the Autumn 2015 edition of ITNOW.

In the first article of the series, we explored what we are beginning to learn from comparing and contrasting the brain and computers, and the benefits both communities can gain from studying the basic architecture of each other’s systems.

However, computing can offer us an even greater prize. Professor Daniel Dennett, arguably the world’s leading philosopher of cognitive neuroscience, has sparked an interdisciplinary debate on the novel potential of computers: ‘How can we communicate directly with our computers as partners to extend everyone’s personal ability to learn, understand, master complex topics and even learn to be more intelligent?’

Unlimited opportunities
Alan Turing argued in his 1936 paper, ‘On Computable Numbers’, that if we could work out the formula to solve any problem, then we could write a program which would enable a general-purpose electronic device to execute that algorithm and so solve the problem. As early as 1956, a group at Dartmouth College in the USA argued that ‘all we needed was to define intelligence and we could build intelligent computers’. In principle, that statement still holds.

Access to the world’s knowledge
Humans have been amassing and disseminating information since the beginning of recorded history, from ancient wall paintings and the great Greek library of Alexandria, then encyclopaedias and dictionaries through to Wikipedia and scientific literature. The challenge lies in turning all this information into knowledge.

Learning
In the previous article we explored the central function of every cognitive sentient brain: the ability to store and recall information; to create memory. We are reasonably sure that we know how the brain grows the trillions of new links and structures to be found in a mature brain, and the process is quite automatic. Every time we use these links, we cross reference them to other concurrent activities, continuously strengthening them, integrating them and incorporating them into our neural networks.

Structure of information
We can learn something else that is very useful from our computers. Every digital datafile can be interpreted as a program, or facts, or algorithms, or pictures, or music, or whatever we choose. To the outside observer all files appear identical.
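This can be seen directly in code. The sketch below (Python; the byte values are arbitrary examples) takes a single four-byte file fragment and reads it as text, as an integer and as a floating-point number – the bytes never change, only our interpretation of them.

```python
import struct

# One four-byte "file": the bytes themselves carry no fixed meaning.
data = b"\x41\x42\x43\x44"

as_text = data.decode("ascii")           # the string 'ABCD'
as_int = int.from_bytes(data, "big")     # the integer 1094861636
as_float = struct.unpack(">f", data)[0]  # an IEEE-754 float (~12.14)
as_hex = data.hex()                      # the hex digits '41424344'

print(as_text, as_int, as_hex, round(as_float, 2))
```

Nothing in the bytes says which reading is ‘right’; the file format, or the program opening the file, supplies the interpretation.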

Getting on top of the mobile app problem

As the planet increasingly plugs into mobile, a new generation of apps is changing how we consume media, how we shop, how we spend our time and how we communicate with each other.

It is hard to imagine that it was only in July 2008 that the Apple App Store was launched – a year after the first iPhone was released. At the time it had 500 apps and, to many, it was a revelation. Ten million applications were downloaded in the first weekend alone. Now that apps have become so much a part of our everyday lives, it is almost impossible to accept that this is still, in fact, a very immature industry – especially in the enterprise.

While enterprises are just starting to scratch the surface of the potential of mobile, almost all have grasped that mobile is an opportunity to drive income and competitive advantage. In a new survey undertaken by Opinion Matters and sponsored by OutSystems, over 200 UK and US respondents were asked about the primary goal of their new mobile app initiatives. The top aim cited was to generate revenue (64 per cent).

The explosive growth that we are witnessing in mobile is driving a deluge of mobile app requests in the enterprise and I know that CIOs are already struggling to keep up with demand. But as demand for mobile app developers grows, it will outstrip supply and companies will find it increasingly hard to hire them.

Today, we already know that the country is experiencing an IT and digital skills shortage, so where are the skills coming from for new mobile developer hires? Our research showed that 63 per cent of respondents already had open developer vacancies equivalent to between 11 per cent and 25 per cent of their current team size. Twenty-nine per cent had between 26 per cent and 50 per cent open vacancies. Only a very small percentage (6 per cent) advised that they have no open vacancies due to a shortage of developer skills.

So what do you think the knock-on effect will be on day rates? If you are looking to hire Java, JavaScript or .NET developers, how much are you going to have to pay for these guys (presuming you can find them in the first place)? Likewise, what impact will not hiring have on your business and your team if you can’t get these much-needed resources in?

According to our research 85 per cent of those surveyed noted that they already have a mobile backlog of between one and 20 applications, with half (50 per cent) having a backlog of between 10 and 20 apps. Growing backlogs will not only damage revenue opportunities, it will also impact on your competitive advantage and stop you from meeting growing user and employee demand.

Let’s face it, employees are becoming divas. They want access to their apps and their devices anytime, anyplace. I know I want the same experience in the workplace as I get from Amazon, for example. The way I use apps in my daily life is the way I expect to use them at work. I want the same seamless journey and the ability to access all my apps on whatever device I choose to use and the business needs to cater for this.

Shaping the internet together

I had the opportunity to speak for BCS on two panels at EuroDIG on aspects of data protection and privacy – one related to identity and payments and the other to big data analytics and the internet of things.

European Digital Single Market initiative
The proposed European Digital Single Market is a very ambitious set of proposals intended to be implemented on an (unrealistically) tight timescale, by 2016. A European Commission spokesman estimated that full implementation of the Digital Single Market would lift EU GDP by 15 billion euros. It was claimed that EU regulations (such as the roaming regulation) had already reduced cross-border communications costs significantly.

Building on NETmundial
All members of the Council of Europe except Russia – 46 of its 47 states – agreed to take the NETmundial principles forward as the single set of principles for internet governance and to use these as the basis of the global Internet Governance Initiative. This is in accordance with the calls over the last three years at the UN IGF (which is the long-term forum – NETmundial was a one-off conference) to agree a global set of principles, and is a major step forward for internet governance.

Cross-border internet law
There was an update on progress with the Internet and Jurisdiction Project.

Article 10 of the European Convention on Human Rights covers freedom of expression and has been interpreted by the European Court of Human Rights to cover both content and the means of transmission across borders within the 47 countries signed up to the Council of Europe.

ICANN and the IANA stewardship transition
A detailed update was given by officials involved in the IANA stewardship transition. This is the change in the internet numbering and naming regime from oversight by one government (USA) to oversight by all.

In simple terms, on a technical level there was agreement that what was done now worked well and little needed changing other than the oversight. The numbers aspects of transition have been agreed with minor changes and this is currently out for public comment until August.

Cybersecurity
It was widely agreed that cybersecurity is a key element in sustaining a sound IT society (including privacy and freedom of expression – see below). It was acknowledged that states are now developing military cyber capabilities (a fact that was emphasised by the Under Secretary at the Ministry for Foreign Affairs from Estonia and debated robustly with Russian officials) and that cybercrime, which is frequently trans-national, is growing.

Data protection, privacy and the IoT
EuroDIG was greatly concerned about freedom of expression, journalistic freedom and privacy. There was great support for the response to concerns about privacy and surveillance in the UN through the adoption of resolution 68/167 as a result of which the General Assembly requested the High Commissioner for Human Rights to prepare a report on the right to privacy in the digital age.

A new culture of data sharing

At the moment it feels like we either choose not to participate in modern life, or submit ourselves to corporate whims and mistakes, says David Evans, Director of Policy & Community at BCS.

Is this the lot of the modern man? The tragic disclosure of their affair online, the violation of their person through identity theft. Their data taken, used, lost by corporations so faceless, so uncaring. Subject to the vagaries of outrageous privacy policies. Their right to be forgotten something devoutly to be wished for? Is our digital world just another stage for human anguish?

Perhaps. Yet that is not how most of us act; for most of us the prospect of harm from data sharing is abstract and somewhat disconnected from our experience. Surveys regularly indicate that around three quarters of us will share our data if there is some perceived benefit – and sharing data for free services has been one of the most successful business models of the internet; a business model that generates a lot of the content we enjoy, and arguably makes the internet function.

It would be terribly easy to respond to disasters by calling for public awareness about the dangers of sharing data, but there are three major reasons why that has little utility. Firstly, information security expertise does not protect you from foolishness on the part of others. Secondly, like it or not, choosing not to share will increasingly mean choosing not to participate in society. Finally, it misses the most vital point of all: sharing personal data is good.

What we need to have front of mind is that sharing our data is a necessary and desirable social and economic function, and that personal data is at its most socially useful and economically powerful when it is aggregated. Allowing BMW to tell you all about cars that might suit you when you’re in the market for a new motor is good for you and them.

Helping John Lewis to better understand what you might like to buy in future is helping them to help you. Having the NHS collate and use very specific bits of data about you – even without your consent – may well save your life and the lives of your children, and cost you less in taxes. We need to make this work for our collective benefit.

Sadly, our current path is in the opposite direction; sharing personal data is not working for anyone particularly well, and it is in danger of getting a lot worse.

We are learning to lie and obfuscate as consumers, and businesses are using ever-more invasive techniques to learn about us, while having to spend more to deal with the messy data we give them. Corporations, governments and consumers are moving from a police action into a de facto state of war over data. As the ‘internet of things’ – an explosion of internet connected devices and sensors – becomes a reality and enters your home, the amount of personal data that’s available will explode, so will the potential benefit, and so will the problems.

State of play: Social media in the enterprise

Social media is not very old. Twitter started in 2006 and now boasts 302 million users; Facebook was launched in 2004 and by 2015 had 1.18 billion monthly users; LinkedIn, the venerable old man of social media launched in 2003, now has 364 million users. How well are they being utilised by business? Brian Runciman MBCS reports.

Businesses quickly latched onto the IT-driven phenomenon of social media – with motivations ranging from better measurement of interaction to simply taking advantage of a bigger shop window. One attraction is its ability to measure things that were more intangible in times past such as customers’ sentiment about a product and also for more time-honoured business pursuits such as analysing the competition and helping determine strategy.

What is more difficult is assessing real return on investment (ROI). Revenue data is difficult to come by, if not simply irrelevant in some social media contexts. Success has been seen, however, in measuring the conversion of potential customers from passive users into subscribers. And, according to Women’s Wear Daily, a line can be drawn between engagement on social media and effective predictions of ROI. Just the goal of increased interaction with a brand could indicate increased purchasing potential.

A very interesting academic paper in KSII Transactions on Internet & Information Systems looks at the potential inherent in the large amount of unstructured text data in social media that show consumers’ opinions and interests. The writers attempt to formulate a comprehensive and practical methodology to conduct social media opinion mining and then apply it to a case study of the oldest instant noodle product in Korea.

The way they represent the output of this study shows the variety of information types available – they use graphical tools and visualised outputs that include volume and sentiment graphs, time-series graphs, a topic word cloud, a heat map and a valence tree map. The sources mined are public-domain content such as blogs, forum messages and news articles, which are analysed with natural language processing, statistics, and graphics packages.
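The paper’s full pipeline relies on dedicated NLP toolkits, but the first step – scoring public posts against a sentiment lexicon and counting word volume – can be sketched in a few lines of Python. The posts and the tiny lexicon below are invented for illustration, not taken from the study.

```python
import re
from collections import Counter

# Tiny hand-made sentiment lexicon (illustrative only; a real pipeline
# would use a full NLP toolkit and a much larger lexicon).
LEXICON = {"love": 1, "great": 1, "tasty": 1,
           "bland": -1, "awful": -1, "stale": -1}

# Invented example posts about a noodle product.
posts = [
    "I love this noodle, great flavour",
    "Honestly a bit bland and stale",
    "Tasty snack, great value",
]

def tokens(post):
    return re.findall(r"[a-z]+", post.lower())

def score(post):
    # Sum of lexicon weights for each sentiment-bearing word in the post.
    return sum(LEXICON.get(w, 0) for w in tokens(post))

scores = [score(p) for p in posts]                   # [2, -2, 2]
volume = Counter(w for p in posts for w in tokens(p))

print(scores)
print(volume.most_common(2))
```

From exactly this kind of per-post score and word-volume count, the graphs, word clouds and heat maps described above can be built.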

This kind of business intelligence is becoming more and more important – going beyond simply measuring thumbs on a Facebook campaign to give actionable data.

Inside the enterprise, too, social media tools are helping. The Journal of Business Communication presents results from a survey of 227 business professionals on attitudes towards the use of social networking for team communication, and its frequency of use and perceived effectiveness compared to other communication channels.

Whilst it shows that traditional channels still hold more sway at present, they also point to a sea change for Gen X and Gen Y business professionals, who are quite likely to consider that social networking tools will be the primary means of team communication in the future.

Another recent academic paper looks at the concept of continuance – which, in the IT context, refers to sustained use of a technology by individual users over the long-term after their initial acceptance. It shows that social media may continue to grow in the business context simply because it is enjoyable to use.

Information management

Information management (IM), as it’s normally understood, is really about the management of information technology, or perhaps data management and software tools. Similarly, the chief information officer (CIO) role isn’t really about information either; it’s about technology.

When the role was first created, though, following the 1977 report of the U.S. Commission on Federal Paperwork (chaired by Forest W. Horton and otherwise known as the Horton Report), it really was about the management of information as a strategic resource, rather than the technology management role it later morphed into.

What I want to look at here is a much wider understanding of information and a much broader concept of information management – what I’ll call authentic information management (AIM). Consider Pareto’s 80/20 principle, which states that, for many events, roughly 80 per cent of the effects come from 20 per cent of the causes. By that measure, it wouldn’t be too surprising if just 20 per cent of all information in organisations is actually useful.

The rest is useless or less than useful. If true, that’s a huge waste of resources and a big drag on efficiency. Not only that, but the less-than-useful stuff is blocking out the useful, and this has big implications for overall, systemic effectiveness – not to mention people effectiveness.
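The arithmetic behind that suspicion is easy to check. In the sketch below, the per-document access counts are invented, but skewed the way real usage tends to be; it finds the smallest share of documents that covers 80 per cent of all accesses.

```python
# Hypothetical access counts per document (invented for illustration):
# a few documents are read constantly, most are barely touched.
access_counts = [500, 300, 120, 40, 15, 10, 6, 4, 3, 2]

total = sum(access_counts)
running, docs_needed = 0, 0
for count in sorted(access_counts, reverse=True):
    running += count
    docs_needed += 1
    if running >= 0.8 * total:
        break

share_of_docs = docs_needed / len(access_counts)
print(f"{docs_needed} of {len(access_counts)} documents "
      f"({share_of_docs:.0%}) cover 80% of all accesses")
# → 2 of 10 documents (20%) cover 80% of all accesses
```

With this (deliberately Pareto-shaped) data, a fifth of the documents carry four-fifths of the use; the same calculation run on an organisation’s real access logs would show how much of its information is pulling its weight.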

For example, back in 1955, the British chain department store Marks and Spencer (M&S) undertook a famous information reduction exercise called Operation Simplification in response to rising overhead costs and falling profits. The well-documented end result was reported to have been an 80 per cent reduction in paperwork!

But the reduction in paperwork didn’t just convert into cost savings. It was also reported at the time that there was evidence everywhere of a hidden treasure of ability and creativity that had been unlocked.

An authentic CIO
So how much effort, time and resource is spent on data management compared with information itself? The former is easier to get your head around because it’s specific, it’s tangible, and there are software tools for it. Of course effective data management is vital, particularly for data quality, because it supports information reliability.

But it may be that authentic information management (AIM) is the next frontier in making effective use not only of information and communications technology (ICT) in organisations, but also of information itself and as a whole.

So how do you go about enabling AIM?
The first thing might be the appointment of an authentic CIO, meaning that they will have overall responsibility for promoting the effective use of all information in the organisation as a strategic resource, with the present CIO re-named the chief systems officer (CSO) responsible for the overall management of information systems and business processes.

Wireless wave of change

This won’t be a gradual evolution. It is being driven by users and will be a fundamental change, similar to some of the other seismic shifts in computing such as the arrival of PCs, which freed users from the dominance of mainframes.

A key driver behind these developments is the introduction in 2013 of the new wireless standard 802.11ac, followed in the next couple of years by 802.11ad. These standards will fuel the increase in mobile devices and BYOD, leading to wireless becoming the status quo instead of wired.

There are many elements supporting this change. 4G with faster and bigger data-handling capabilities will drive expectations in the office. The growing deployment of mobile IPv6, with its significantly enhanced capabilities, enables better roaming. Cloud and virtualisation shift both the perception and the very nature of company boundaries, making mobility even more relevant.

The real dilemma is how do you secure, implement and manage what you don’t know? Already developments such as learning apps, Google Glass, payments from mobiles, Tizen 2.1 (multi-device operating system) and CloudOn (which allows users to run business apps on their mobile in the cloud) are all throwing up new areas to be defined and incorporated into security policies. Over the next few years, there will be many more innovations that will directly impact organisational structures and security.

Challenges
One major challenge for IT managers is how to navigate their way through a fluid and fast-evolving situation where network infrastructures are changing rapidly and where it’s very hard to predict what the changes will be.

Questions arising include how to develop the network so users can get the best productivity and other benefits from existing and new mobile devices. How do you go about moving to wireless in a cost-effective way, with the least disruption to the business? How do you track and manage the growing number of mobile devices? How do you maintain control of the network? And how do you keep the network secure in this rapidly changing environment?

The move to wireless
The new wireless standard 802.11ac provides initial WLAN throughput of at least 1Gbps, rising towards 7Gbps in later waves. 802.11ad, with multi-gigabit throughput, will provide up to 7Gbps when it is ratified and introduced. And 4G will provide up to 100Mbps on mobile. This gives the potential for radically improved wi-fi performance over what is available in the workplace today.
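Some quick arithmetic shows what those headline rates mean in practice. The sketch below converts each rate into a transfer time for a 2GB file (an arbitrary example size; all rates are theoretical peaks, and real-world throughput is substantially lower).

```python
# Rough arithmetic on the headline rates quoted above.
FILE_GB = 2                      # example file size in gigabytes
file_megabits = FILE_GB * 8_000  # decimal units: 1 GB = 8,000 megabits

rates_mbps = {
    "4G (100Mbps)": 100,
    "802.11ac (1Gbps)": 1_000,
    "802.11ad (7Gbps)": 7_000,
}

for name, rate in rates_mbps.items():
    seconds = file_megabits / rate
    print(f"{name}: {seconds:.1f}s to move a {FILE_GB}GB file")
```

At the quoted peaks the same transfer drops from around 160 seconds on 4G to roughly two seconds on 802.11ad – the scale of change driving user expectations in the office.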

Many wireless deployments to date have been tactical, with more access points added, often unstructured, to meet increasing user demand or deal with cold spots. Usually, they have been neither fully pervasive nor capable of handling multi-media, high-volume and high-density traffic. Of course, they are based on the longer range of the old 2.4GHz access points.

802.11ac will deliver the unfulfilled promise of 802.11n, but with a focus on 5GHz rather than 2.4GHz. With 5GHz providing shorter range but higher throughput, existing access point (AP)-based systems will be inadequate for the new requirements.

Migrating to 802.11ac will require entirely new APs, new antennas, upgraded or replaced controllers and new switches or power over ethernet (PoE) injectors. Similar to the evolution of 802.11n, there will be multiple versions and phases of 802.11ac. For some organisations, this will mean a rolling deployment, with the associated configuration and security risks.

An increasingly popular alternative to the AP approach is the modular array approach. With this method, an array can hold multiple, directionally tuneable APs. Unlike traditional broadcasting, directional focus minimises interference and enables clear control over geo overspill.

This is particularly relevant given the challenges that 5GHz and beyond will create for the old AP-based approach to coverage. With 2.4GHz, providing more coverage typically involves adding more APs. However, that has been shown to be increasingly self-limiting because interference between APs reduces coverage, rather than increasing it.

A major benefit of an array-based or directional-based approach is that it can be easier to upgrade as traffic usage and capacity evolve, allowing companies to react swiftly to changing circumstances. Key to success in adopting or extending wireless networks will be deployment pre-planning, risk assessment and determining the applicable policies.

Social political engagement

Existing social media platforms aren’t providing effective political engagement, because they weren’t designed to. BCS has been calling for a purpose-built platform to improve meaningful communication between the public and MPs. James Davies looks at the work to be done.

The way that many of us live our lives online nowadays is naturally spilling over into the way people engage with politics and with politicians. Accompanying the rise of online campaigns, e-petitions and political memes, the internet – and social media specifically – is shifting the ways in which citizens engage with their elected representatives.

This shift is as fundamental as the one that came with the advent of radio or television. Huge numbers of citizens have taken to social media platforms to communicate with their local MPs, but with wildly varying levels of success. Some MPs try to avoid digital communications altogether. Others struggle to manage the immense volume of direct public engagement made possible by social media channels. Many receive daily abuse or even death threats online.

Soon after being re-elected in June, Conservative MP Ranil Jayawardena let his constituents know he would not be using Twitter anymore, ‘because it has become a platform full of trolls, extremists and worse’, which he felt was producing a climate of fear for his colleagues and his constituents.

Not fit-for-purpose
Social media companies are not the enemy here; the problem is that these platforms were never designed for this purpose. Some MPs received an average of 10,000 messages every day, while others received fewer than five a day. This obviously presents huge potential inconsistencies between MPs’ abilities to respond to members of the public using this medium. Political engagement online is not functioning in a manageable or societally beneficial way.

Joined-up approach
No one party can – or should – be responsible for this, and so BCS and Demos are calling for a cross-party alliance to work with existing social media platforms to improve their offerings. A solution to the current situation would be a purpose-built platform established to facilitate meaningful and effective political engagement online. BCS and Demos have written to all mainstream political parties asking them to work with us and each other to address the issue.

Online political engagement is here to stay, and questions around how well it is serving our political process will only increase over time. We now have the chance to get ahead; to give proper consideration to how the situation can be improved and make IT better for society.

Make the Web Better for Everyone

The Web has serious problems: peddler of unreliable information, haven for criminals, spawning ground for irrational conspiracy fears, and tool for destructive people to broadcast their violence in real time and with posted recordings.

No doubt your list of Web pathologies is different from mine. But surely you agree that the Web disappoints as much as it delights.

Now the hard part—what to do about it?

Starting over is impossible. The Web is the ground of our global civilization, a pillar of contemporary existence. Even as we complain about the excesses and shortcomings of the Web, we can’t survive without it.

For engineers and technovisionaries, the solution flows from an admirable U.S. tradition: building a better mousetrap.

For redesigners of the broken Web, the popular impulse is to expand digital freedom by creating a Web so decentralized that governments can’t censor it and big corporations can’t dominate.

However noble, the freedom advocates fail to account for a major class of vexations arising from anonymity, which allows, say, Russian hackers to pose as legitimate tweeters and terrorist groups to recruit through Facebook pages.

To be sure, escape from government surveillance through digital masks has benefits, yet the path to improved governance across the world doesn’t chiefly lie with finding more clever ways to hide from official oppression. More freedom, ultimately, will only spawn more irresponsible, harmful behavior.

If more freedom and greater privacy won’t cure what ails the Web, might we consider older forms of control and the cooperation of essential public services?

In the 19th century, railroads gained such power over the lives of cities and towns across the United States that norms, rules, and laws emerged to impose a modicum of fairness on routes, fares, and services. Similarly, in the 20th century, the Bell telephone network, having gained a “natural” monopoly, came under the supervision of the U.S. government. So did the country’s leading computer company, IBM.

Because of government limits, Bell stayed out of the computer business—and licensed its revolutionary transistor to others. IBM’s management, meanwhile, felt pressured by the government to “unbundle” software that came free with its computers, which in one swoop created a nascent software industry that a half century later is the envy of the world.

Since governments can help make innovations fairer, what kind of interventions might the U.S. government make to reform the Web? First, it can support net neutrality. The policy helps sustain wider support for asking Amazon, ­Facebook, and Google to behave as “common carriers,” which must treat their vendors evenhandedly but also police their behavior, disallowing Web fraud in all forms.

The Language of the Dark Web

Part of the mythology of the early Internet was that it was going to make the world a better place by giving voice to the masses and leveling playing fields. Light was the metaphor of choice. For example, Apple cofounder Steve Wozniak once said that “when the Internet first came, I thought it was just the beacon of freedom.”

You can easily make a case for how much “brighter” the world is now, thanks to ubiquitous connectivity shining a light on misbehavior and malfeasance, but the Internet has a dark side as well.

For example, when you enter a search term into Google and it spits out the results, you might think that the search engine spent those few milliseconds querying the entire Web. Nope, not even close. What Google indexes is a fraction of all the available Web, perhaps just 4 percent of the total, by some estimates. That indexed soupçon is called the surface Web, or sometimes the visible Web. What about the other 96 percent? That nonsearchable content is called the deep Web, dark Web, or sometimes the invisible Web. A related idea is dark social, those online social interactions that are not public and cannot be directly tracked or traced (such as text messages and emails).

Most of this hidden Web is obscured either because it resides within databases that are inaccessible to search crawlers (because they require that information be entered into an HTML form), or because those crawlers don’t have permission to access certain types of data (such as the personal info that people store within the cloud).

But a significant subset of the hidden Web is the online equivalent of caves, lairs, and dungeons where hackers, criminals, and trolls gather. I speak now of the aforementioned dark Web, the collection of websites where miscreants and malefactors go to buy or sell narcotics, weapons, and stolen goods (these are known as dark markets), where the desperate and the desperadoes hire hitmen and arsonists, and where trolls and dark-side hackers gather covertly in forums and chat rooms. This so-called darknet is hidden from view not because an HTML form is in the way but because it requires special tools to get there at all. The most common of these is Tor, a worldwide network of relays run by volunteers that anonymizes and encrypts traffic to the dark Web’s typical .onion URLs (Tor is short for The Onion Router).

The darknet seems like a place populated only by the lawless and the anarchic, but does it have anything to offer the rest of us? Consider, for example, that the Tor network itself has also received considerable U.S. government funding, in part to protect democratic movements in authoritarian regimes.
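Tor’s layered, or ‘onion’, encryption can be illustrated with a toy sketch. Real Tor negotiates a separate symmetric key with each relay in a circuit and uses proper ciphers; the Python below substitutes simple XOR pads purely to show the layering idea – the client wraps the message once per relay, and each relay peels off exactly one layer.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # XOR pad: a toy stand-in for the real per-relay cipher.
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at the usual place"
relay_keys = [os.urandom(len(message)) for _ in range(3)]  # one key per relay

# The client wraps the message in one layer per relay (innermost layer last).
onion = message
for key in reversed(relay_keys):
    onion = xor(onion, key)

# Each relay peels exactly one layer; only after the last relay does the
# plaintext emerge.
for key in relay_keys:
    onion = xor(onion, key)

assert onion == message
print("round trip ok")
```

No single relay sees both who sent the message and what it says: each one knows only its own layer, which is the property that makes the network anonymising.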