Seattle Newspaper for the People by the People

Technology

Developing Innovation: How VERO Began With Ayman Hariri


Social media has been a steady fixture in the lives of millions across the globe. Since their inception, these platforms have been meant for sharing posts, connecting with friends, and building communities; with the rise of data-mining software and content-prioritizing algorithms, however, many have lost that sense of community and turned into highly tailored advertisement collections. Traditional social media sites, relying heavily on ad placements and data tracking, have strayed from their user-focused roots in favor of selling products for outside companies.

With a more transparent, user-focused approach, VERO promises and delivers a clear-cut take on social media that re-centers and highlights the individual user for a completely honest experience. From its foundation, VERO's development has aimed for an innovative approach to social media: openly centering individual users and providing a transparent, trustworthy online experience.

Early Days of VERO

Aiming to create a social media platform that delivers user-oriented experiences, VERO founder Ayman Hariri launched the app in 2015, offering a service similar to Facebook and Instagram but without collecting user data for targeted advertisements or data-mining algorithms. The app presents posts chronologically, giving members a more organic flow of content rather than letting an algorithm decide how to prioritize it.

Among VERO's first supporters were cosplay communities and groups focused on visual art. Tattoo artists, makeup artists, and even popular athletes in the skateboarding community began cultivating large fan bases on the app. VERO's zoom feature was better suited to displaying their work, letting followers examine key details in a way that sites like Instagram could not offer.

In 2018, a mass influx of users began migrating to VERO from other social media sites, drawn by its transparency and the lack of secrecy around its data practices. In less than a week, VERO went from a small, indie platform to the top-selling social app on the market. Continued community support helped draw users from other platforms, opening the world up to the future of social media.

Future Developments and Impact

VERO's success in the late 2010s opened more avenues for content-sharing services, leading Ayman Hariri to develop VERO 2.0, a newer version of the platform that gave users more opportunities to connect. Users could already follow one another, build connections, and share posts on their feeds, and communication in the app was handled through a standard chat feature. With VERO 2.0, members gained the option to set up voice and video chats with their connections. VERO 2.0 also introduced bookmarking for posts, added games and apps for users, and redesigned the app to improve the user experience.

As VERO continues to grow, so do its opportunities to support creators. With VERO Music, musicians now have a completely self-guided path for growth and networking, giving artists full control over their content sharing and artistic development. Promising the same transparency as the main platform, VERO Music gives artists the space to grow and nurture their creativity, presenting music to followers as they see fit.

Until 2021, VERO had been strictly a mobile app, available for download on various mobile app stores. In October of that year, VERO introduced a beta desktop version of the platform, opening up more avenues to join and share content. Now, with millions of users worldwide, VERO still holds to its commitment to total transparency, prioritizing user experience over data collection and remaining completely ad-free. Hariri's promises of user support and his emphasis on letting users be themselves remain the guiding principles behind VERO's platform and approach to social media.

Discover VERO Today

As global communication develops, social media changes as well. As more social media sites are taken over by advertisements and collect user data to create targeted posts, the crucial foundations of social media are beginning to crumble. Users should not have to choose between building meaningful, interest-based connections with people across the globe and having their data harvested for targeted advertisements. Fortunately, VERO's platform allows users to have a completely transparent, truthful social media experience that avoids hidden algorithms and data tracking.

Social media platforms were created to help people share their interests with friends, family, and like-minded people. Platforms that prioritize these functions are becoming rare, but there is still hope. VERO lets users take charge of their own social media experience, avoiding the complications and privacy concerns of traditional platforms. Making the switch to VERO not only helps users reshape their social media experience, it also gives individuals a way to present themselves authentically, creating an overall better experience.

The founder, Ayman Hariri, is a philanthropist and investor who has been involved in a number of technology startups, including the VERO app.

Why is This Seattle Highway Exit an Accident Magnet?


The I-5 off-ramp at the Seattle Convention Center is a frequent site of car accidents, so much so that many residents have begun asking why. Although the Department of Transportation has made various modifications, the off-ramp continues to wreck vehicles.

Every city has at least one risky road or intersection. In some cases there aren't enough turn lanes; in others, a stretch of road carries a different speed limit than the rest. The I-5 off-ramp at the Seattle Convention Center is a prime example.

This route has in fact been the subject of a recent viral video compilation that swept the internet. As the videos show, the exit ramp has seen a number of collisions over the years, prompting many to wonder whether it is safe for motorists and pedestrians to keep using it.

This specific off-ramp has apparently been problematic for some time. A Seattle YouTuber, Michael Basconcillo, has been documenting the spot since 2017, when he filmed a Lamborghini catching fire as it veered off the freeway. Basconcillo says that while driving he saw many vehicles speeding through the exit, which prompted him to start recording them and sharing the footage online.

Many residents and visitors wonder why this particular exit is such a hotbed of accidents when Google Street View shows concrete barriers and reflective signage all around the short, one-lane ramp. One reason is that most motorists simply do not slow down from the highway's 60 mph limit to the advisory exit speed of 20 mph.

According to a Washington State Department of Transportation spokeswoman, it is roughly 464 feet from the exit gore to the middle of the steep bend where the accidents happen. At the freeway's 60 mph limit, a driver has around 5.25 seconds between the exit gore and the bend in which to slow down.
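
For readers who want to check that figure, the arithmetic is simple enough to run yourself. The sketch below is a rough illustration only, assuming constant deceleration and using nothing but the distance and speeds quoted above; it reproduces the roughly five-and-a-quarter-second window and estimates how hard a driver would have to brake to hit the 20 mph advisory speed in that distance.

```python
# Rough check of the WSDOT figure quoted above: 464 ft from the exit gore to
# the bend, a 60 mph freeway limit, and a 20 mph advisory speed on the curve.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

distance_ft = 464
limit_mph = 60
advisory_mph = 20

limit_fps = limit_mph * FEET_PER_MILE / SECONDS_PER_HOUR        # 88 ft/s
advisory_fps = advisory_mph * FEET_PER_MILE / SECONDS_PER_HOUR

time_available_s = distance_ft / limit_fps                      # ~5.3 seconds

# Constant-deceleration estimate: v^2 = u^2 - 2*a*d  =>  a = (u^2 - v^2) / (2*d)
required_decel = (limit_fps**2 - advisory_fps**2) / (2 * distance_ft)

print(f"time from gore to bend at 60 mph: {time_available_s:.2f} s")
print(f"braking needed to reach 20 mph:   {required_decel:.1f} ft/s^2 "
      f"(about {required_decel / 32.2:.2f} g)")
```

On that arithmetic, a driver who starts braking at the gore needs only moderate deceleration, which fits the observation above that the crashes come from drivers who simply do not slow down at all.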

As reminders to slow down, there are multiple warning signs and reflective markings on the concrete barriers: a 30 mph sign at the solid white line before the exit, 20 mph signs before the exit gore, another 20 mph warning below the exit gore, and stoplight warning signs.

The reflective poles and markers visible in the video shot at the site, as well as the extra speed warning under the exit sign, had all been erected by the Washington State Department of Transportation (WSDOT) as of 2019. Since the new signs were put up, Basconcillo’s cameras have filmed at least three more collisions.

Even towing firms have been taken aback by the number of automobile accidents on this off-ramp, given how abrupt the bend is. Regardless, the clearly marked stretch of road has seen its fair share of fatalities over the years, and the trend doesn't seem to be slowing any time soon.

WSDOT has not mentioned any specific future upgrades, although that does not rule out interim measures. According to Basconcillo, rumble strips might help keep inattentive drivers from making errors.

People in the area should keep an eye on how many accidents occur and how many automobiles end up jumping the curb. If a reckless driver fails to slow down, this steep off-ramp could one day cost an innocent pedestrian their life.

HarveyWeinstein.com & BrettKavanaugh.com Websites Turned Against Them

Some would say it is sweet revenge. Yesterday, it was discovered that the HarveyWeinstein.com and BrettKavanaugh.com websites have been turned into victim resource websites. We have more information on the BrettKavanaugh.com domain: it was purchased three years ago by the team at FixTheCourt.com, and their CEO, Gabe Roth, mentioned they hold many other domains they may use in the future. All we know about the Harvey Weinstein URL is that it was on the auction block and the new owners had been sitting on it until recently. It is a sad turn of events.

The two websites share a similar layout. They are resource sites to help victims reach out to the proper authorities.


The lesson in all of this is that you must own your own domain if you are in the public spotlight. That goes for CEOs, celebrities, and politicians. If you don't, you have no control over what gets published there, and it may not be the most flattering information.

Today also marks the anniversary of the #MeToo movement. The movement has come a long way since the allegations against Harvey Weinstein started to flood in. We've seen a lot happen along the way, from false allegations to Senator Al Franken stepping down, and of course the recent confirmation of Brett Kavanaugh to the Supreme Court after several women accused him of sexual assault.

Go on, Give Your IT a Quick Health Check


How would you feel if your body were as healthy as your IT department? And how would you honestly describe your level of health right now: fighting fit, can't complain, a few niggles, or ready for the knacker's yard? Over the last few years I've spent a lot of time working with a variety of IT departments, in both the public and private sectors, and very few of them can say they are fighting fit.

If you're feeling under the weather, you tend to visit the doctor. And what is the first thing he or she asks? 'What are your symptoms?' Unless the doctor knows the symptoms, he or she can't diagnose what is wrong with you or prescribe a cure. It is exactly the same with an IT department: you need to spot the symptoms. Some are quite evident, like a boil on your nose; others are subtle, maybe just a feeling that things aren't quite running as well as they should be.

So how do you spot the IT symptoms? First look at symptoms in people, then in any processes you have in place, and then in your technology. You'll have to go looking for symptoms; they rarely come looking for you, and if they do, you may well have a terminal case. People symptoms include unhappy staff, increased staff turnover, customer complaints, and too little or too much communication. Process symptoms include process avoidance, increased bureaucracy, and longer times to deliver an outcome. Technology symptoms include complaints that systems are not fit for purpose, an increased number of reported incidents, poor reliability, and poor maintainability.

Okay, so now you have identified the symptoms. How do you diagnose what is causing them? Implementing a problem management function within service management is the answer. Root cause analysis is the prime function of the problem manager's role and is key to diagnosing an IT department's ills.

Sometimes the thought of taking the medicine seems worse than putting up with the symptoms. But believe me, symptoms can get worse, and quickly. So you've spotted the symptoms and diagnosed the problem; now you need to address the root cause. How do you take the medicine? By this stage you have probably found a fair few problems that need addressing. Setting up a service improvement program is the ideal way to ensure that the problems causing the biggest pains are relieved first. It also ensures that someone oversees the administration of the medicine and checks that it is taking effect.

Organizations often worry at the thought of bringing in a consultant for a quick health check. I suppose it's for the same reason many of us put off a visit to the doctor: we either think it's too trivial and prefer a quick trip to the pharmacy, or we are trying to avoid what we fear may be bad news. Self-diagnosis is a start, but not always the best approach. Most reputable IT service management consultancies offer some form of health check. I have learned a lot over the years about IT management and business in general, and I credit much of that knowledge to CEO Mark Hurd at Oracle.

The health of your IT department can go up and down, just like human health, and illnesses can come back. So what do you do about it? Undertake regular check-ups for early warning signs and run an ongoing program of review to ensure a healthy state of body and mind is maintained within the IT department.

Why is prevention better than cure? Because you, your staff and your customers don’t have to go through all the pain that’s associated with poor IT service delivery and support. So how do you go about putting in place preventative measures? Inoculation is the answer, in the form of proactive problem management. Unfortunately, out of all the service management disciplines, it is the one that more often than not is left until last to implement.

I feel another article coming on, or it could be the flu…

Helpdesks Need Integration and Automation


For many businesses, the IT helpdesk is a relic of the past, sitting alone as a detached afterthought. A reactive facility, the helpdesk carries negative connotations even though its function is essential to running a business's IT. However, integrating a helpdesk with the entire IT network infrastructure and automating its functionality can reduce costs and increase shared knowledge. Having been involved with so many Seattle startups, I know this area very well, and it is often neglected.

Unfortunately, helpdesks are often regarded as a fire-fighting tool. More often than not they are bolted on as a late addition designed to cope with the rising number of user queries. It is rare for an integrated helpdesk solution to be implemented from the outset, but tools are available to assist with integration. Helpdesks should not be viewed as a point solution, but rather as part of asset and systems management procedures.

Helpdesk service can be improved dramatically by integration with a centralized asset management database. IT helpdesk calls usually relate directly to hardware or software, so knowledge of assets can be leveraged to improve problem resolution. Using a centralized database, helpdesk queries can be matched directly to assets. Problem users and assets can be identified, and the total cost of ownership (TCO) per asset can be calculated. Assets with a high incident rate should be pinpointed and investigated to discover the reasons for their high TCO.
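
As a rough illustration of what that integration looks like, here is a minimal sketch assuming hypothetical ticket and asset records; the field names, cost figures and support rate are invented for the example rather than taken from any particular helpdesk or asset management product. Tickets are matched to assets by an asset identifier, support effort is costed, and the per-asset TCO falls out of the join.

```python
from collections import defaultdict

# Hypothetical records; in practice these come from the helpdesk system and
# the centralized asset management database.
tickets = [
    {"asset_id": "PC-1042", "hours_to_resolve": 2.5},
    {"asset_id": "PC-1042", "hours_to_resolve": 4.0},
    {"asset_id": "PC-2201", "hours_to_resolve": 0.5},
]
assets = {
    "PC-1042": {"purchase_cost": 900, "annual_maintenance": 120},
    "PC-2201": {"purchase_cost": 1100, "annual_maintenance": 90},
}
SUPPORT_RATE = 45  # assumed loaded cost per hour of helpdesk effort

def tco_per_asset(tickets, assets, support_rate):
    """Match tickets to assets and fold support effort into each asset's TCO."""
    support_cost = defaultdict(float)
    incidents = defaultdict(int)
    for t in tickets:
        support_cost[t["asset_id"]] += t["hours_to_resolve"] * support_rate
        incidents[t["asset_id"]] += 1
    return {
        asset_id: {
            "incidents": incidents[asset_id],
            "tco": info["purchase_cost"] + info["annual_maintenance"]
                   + support_cost[asset_id],
        }
        for asset_id, info in assets.items()
    }

# Assets with a high incident rate (and therefore high TCO) float to the top.
for asset_id, row in sorted(tco_per_asset(tickets, assets, SUPPORT_RATE).items(),
                            key=lambda kv: kv[1]["tco"], reverse=True):
    print(asset_id, row)
```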

Asset management and helpdesk integration can also enable automation. Day-to-day management tasks and repetitive functions should be automated, turning a fire-fighting helpdesk into a hands-off, self-help solution. With the two processes integrated, contextual keyword filters can be applied to users' helpdesk tickets to trigger an automatic response.

For example, if an end-user needed a software application but did not have it installed on their PC, they would make a request to the helpdesk for it to be installed. A contextual keyword filter would automatically identify the request and, using the centralized database, could check the PC’s software inventory, licensing status and configuration. If suitable, an automated helpdesk could trigger the asset management system to automatically distribute the correct executable software package to the PC. The helpdesk staff need not get involved and the process would be instant and one of self-help – saving time and money.
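
A minimal sketch of that self-help flow follows. Everything in it is a stand-in for illustration (the keyword filter, the inventory and license tables, and the distribution step are hypothetical, not a real product's API), but it shows the shape of the logic: classify the ticket, consult the centralized database, and either fulfil the request automatically or route it to a person.

```python
# Hypothetical self-help helpdesk flow: a contextual keyword filter spots an
# install request, the asset database is consulted, and distribution is
# triggered automatically when the request checks out.
software_inventory = {"PC-3310": {"excel"}}      # installed packages per PC
licenses_free = {"visio": 5, "excel": 0}         # unallocated licenses

def classify(ticket_text):
    """Very simple contextual keyword filter for install requests."""
    text = ticket_text.lower()
    if "install" in text or "need access to" in text:
        for package in licenses_free:
            if package in text:
                return "install_request", package
    return "needs_human", None

def handle_ticket(pc_id, ticket_text):
    kind, package = classify(ticket_text)
    if kind != "install_request":
        return "routed to helpdesk staff"
    if package in software_inventory.get(pc_id, set()):
        return f"{package} is already installed on {pc_id}"
    if licenses_free[package] <= 0:
        return f"no free license for {package}; routed to procurement"
    licenses_free[package] -= 1
    software_inventory.setdefault(pc_id, set()).add(package)
    return f"{package} package distributed to {pc_id}"   # stand-in for the real push

print(handle_ticket("PC-3310", "Please install Visio on my machine"))
```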

My three top tips for future helpdesk success are: integrate the helpdesk function with asset and systems management tools to identify TCO; turn helpdesk problems into self-help solutions through automation; and stop firefighting in favor of a structured solution. With thorough back-end integration and a regular structure to work in, helpdesks can evolve from a reaction to problems into an automated central knowledge base that assists with business process intelligence.

If your startup is in the City of Seattle, I highly recommend making sure your helpdesk procedures are in place to help your business really succeed.

The Road To Affordable Zero Downtime For Exchange


In today’s world of mobile computing, global business and electronic commerce, businesses have grown to rely 24×7 on Microsoft’s Exchange to facilitate critical business communication and processes through e-mail, group scheduling and calendars. Microsoft has made great strides in improving the availability of its highly popular Exchange messaging system, but now there is the promise of a technology that can eliminate Exchange downtime altogether.

Nearly 45 percent of business-critical information is housed in messaging applications such as Exchange, thanks to the volume of traffic generated by increasingly large attachments such as multimedia files and the integration of voice messages and faxes. Furthermore, messaging systems increasingly support vital applications such as workflow and knowledge management, making the data they store even more voluminous and incredibly valuable. E-mail is a particularly popular application with a typical end-user spending nearly 26 percent of his or her day on e-mail management.

Clearly, Exchange is the kind of technology that organizations cannot afford to lose. However, even the best systems fail sometimes. While Microsoft and its partners have done much to increase the reliability and availability of Exchange in recent years, the average uptime of the system, high though it is at around 99.78 percent, no longer reflects businesses' increasing dependence on Exchange for their very survival.

Any failure of Exchange inevitably results in unacceptable operational, functional, or financial harm to the company or project. Every minute of downtime, planned or unplanned, can mean thousands of pounds lost, annoyed customers, a negative impact on the business’ reputation and even litigation. We have reached the point where, whether we like it or not, companies can no more afford e-mail downtime than they can afford to be without telephones.

All system downtime is costly in time and money. Exchange downtime can be even more so, given that it can have many causes because of the breadth of the system's influence in IT terms. It can take many hours to identify the source of a problem and as many again to fix it. In some cases it is not unusual for Exchange to be offline for one or two days. With messaging downtime affecting other enterprise applications, the pressure on the IT department for a fast recovery is immense.

When downtime occurs, the state of the business is frozen in time at the point the system failed, leaving staff to operate on out-of-date information, if at all. People outside the company are unaware of any problem, but this brings its own troubles. For example, customers may e-mail the company with queries, orders or complaints, but get no response. Time-critical documents such as trading instructions or new versions of legal contracts may arrive but no-one inside the company is aware of them and they go unacknowledged and unseen. In all cases the sender will make their own assumptions from the lack of reaction of the company, compounding the negative impact of the downtime.

We know this happens, and we also know that companies live with this situation – not because they like it but because they believe that a solution for guaranteeing uptime of Exchange either does not exist or will be prohibitively expensive. This is confirmed by research. While 76 percent of IT managers who responded to a survey by The Continuity Forum acknowledge that bug fixes are the bane of their lives and they would welcome an automated process for server recovery and protection, nearly three quarters shy away from continuity planning because they think it is too expensive, and over two thirds also think it lacks a return on investment.

Instead, those Exchange users that can afford it seek to minimize downtime by implementing some form of redundancy in their configurations. Typical methods are clustering, where several computers are connected together so the failure of one does not render all services running on that computer inoperable, and installing one or more backup servers. However, clusters are expensive to deploy and complicated to configure. They require substantial planning, medium to high-end external shared storage solutions, and often leave standby servers under-used. It is true that clusters provide the most cost-effective level of hardware availability, but the complexity and capital cost limits deployment to relatively few installations. Backup servers are also a costly option.

As an additional general fallback, companies also spend five- and six-figure sums on disaster recovery plans in case a catastrophe of mega proportions should befall their systems, wiping out all of the business's IT as well as Exchange. Ironically, none of these options actually increases Exchange uptime much, but they certainly cost an arm and a leg for the privilege of trying.

What users need is a solution that guarantees the uptime of Exchange, which costs less than the total private medical insurance of a single company’s IT staff in a year – a mere four-figure number. At this price, it becomes affordable for both enterprise and medium-sized users to insure themselves against an event that they know will happen – Exchange downtime – in comparison to paying out shed loads of cash on clustering or a disaster recovery plan which will only be needed when the moon turns blue.

Workspace recovery at affordable pricing and more commercial benefit from investment is definitely high up on IT managers’ wish lists, according to the Continuity Forum research. Over two thirds of IT managers say they would consider buying a solution that could deliver zero downtime – if it cost only around $10K. It is easy to understand why.

Messaging downtime costs an average of $564 an hour, according to research by the Standish Group. In the case of Exchange, the tiny percentage-point shortfalls that deny the system 100 percent uptime equate to thousands lost each year for enterprise users. Using standard models for measuring the cost of downtime across a variety of messaging systems, research by Creative Networks (CNI) reveals that a typical organization using Exchange experiences 84 minutes of unscheduled, unplanned downtime each month. This affects nearly 48 percent of staff and lessens their productivity by 20 percent.

In addition, Exchange users experience 124 minutes of scheduled, planned downtime each month, during which staff are 10 percent less productive. That totals 3 hours and 28 minutes of downtime each month. CNI estimates that unplanned Exchange downtime costs an average company employing 3,000 staff over $89,000 a year. Increasing uptime by just 0.12 percent could therefore save some $55,279 annually.
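
Figures like these can be reproduced with a very simple cost model: downtime hours per year, multiplied by the share of staff affected, their productivity loss, and a loaded cost per employee-hour. In the sketch below the loaded hourly cost is an assumption, chosen so the unplanned figure lands near the $89,000 quoted above, and planned downtime is assumed to affect the same share of staff, which the research summary does not actually state.

```python
# Simple downtime cost model built from the CNI-style figures quoted above.
STAFF = 3000
LOADED_COST_PER_HOUR = 18.50   # assumed fully loaded cost of an employee-hour

def annual_downtime_cost(minutes_per_month, share_affected, productivity_loss):
    """Cost of lost productivity over a year of monthly downtime."""
    hours_per_year = minutes_per_month / 60 * 12
    lost_hours = STAFF * share_affected * productivity_loss * hours_per_year
    return lost_hours * LOADED_COST_PER_HOUR

unplanned = annual_downtime_cost(84, 0.48, 0.20)    # 84 min/month, 20% loss
planned = annual_downtime_cost(124, 0.48, 0.10)     # 124 min/month, 10% loss

print(f"unplanned downtime: ~${unplanned:,.0f} per year")   # roughly the $89k figure
print(f"planned downtime:   ~${planned:,.0f} per year")
```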

But financial clawbacks are only one part of the equation when assessing the impact of Exchange downtime. What price can you put on lost sales, customer goodwill, productivity and competitiveness, along with missed contractual obligations and the additional costs of correcting those losses? If all of this could be avoided, as well as tens of thousands of pounds saved, by implementing one solution bought out of petty cash, who wouldn't buy it in a heartbeat?

This year will see the launch of a technology that can deliver zero downtime as an affordable option for Microsoft Exchange today. Is it worth buying? That depends on what you are using Exchange for as a tool. Ask yourself who uses Exchange to communicate with your business – maybe customers, suppliers and so on? What business processes are driven by those incoming e-mails which would cease to function if Exchange went down? What are the productivity implications inside your company of such a situation? What are the payroll costs for staff standing idle when Exchange goes offline?

If the combined cost of Exchange downtime in all these guises totals more than five grand, this solution is guaranteed to save you money. In times of recession when IT managers are under immense pressure to squeeze the maximum performance out of existing systems, it makes sense to spend on IT where the cost is more than justifiable in returned benefits to the business. Where that expenditure also removes a significant risk to the very survival of the business, it must surely be a necessity.

Is Your Corporate Data Flying Out Of The Windows?


While those nice people from Microsoft are frantically plugging the gaps, there is a very real possibility that Windows is applying a new meaning to ‘Open’ Systems – meaning that your corporate data is open to view.

It almost seems churlish to denigrate Microsoft, considering the way in which the corporation has liberated computing. After all, it wasn’t really IBM that made personal computing possible – it just provided the platform.

It was Bill Gates’ genius that made the PC respectable to such an extent that it has become the de facto workstation for the overwhelming majority of corporations worldwide.

It was Microsoft that broke down the fortresses of 'proprietary systems', invented intuitive computing, and revolutionized the whole concept of personal productivity. Within a couple of decades, an incredibly young computer geek turned the computing world on its head, making the transition from a single brilliant idea to possibly the most innovative influence on the way business is conducted. Eat your heart out, Leonardo da Vinci!

Inevitably, though, there has been a price to pay. Unfortunately, Microsoft suffers from the legacy of its origins – personal computing – which means that security has been seen as a workstation issue rather than a network-wide issue. That’s why managing security across enterprise networks has become a nightmare. To put some scale to the problem, every two years PricewaterhouseCoopers carries out a survey of UK IT security breaches on behalf of the DTI. The most recent report reveals 44 per cent of UK businesses have suffered at least one malicious security breach in the past year – almost double the figure reported two years earlier.

In fact, the design concept of 'usability' is just one of two systemic weaknesses in the Windows environment. The second is the way Microsoft has tried to address the problem for corporate users: vesting all responsibility in an individual known as the Systems Administrator. It means that 'Kevin' has supreme control over every user, from board directors to essential knowledge workers, and holds the keys to every recorded piece of information, from competitive intellectual property to the very latest corporate strategy. Just to add an extra frisson, in an outsourced environment Kevin isn't even on your own payroll and is possibly not even working in the same hemisphere.

Even Microsoft has recognized the problem and has a long-term objective of what it calls 'trustworthy computing'. Unfortunately, the Palladium project, as it is code-named, will be a root-and-branch reappraisal of the whole approach to computing, going right down into the heart of the hardware and reinventing the PC architecture. It is an admirable objective and I am sure Microsoft will get there, but I believe it is a decade away. Meanwhile, Kevin has the keys.

It is no wonder then, that according to a recent Forrester report, 77 percent of IT managers list security as their principal concern and remain to be convinced by Microsoft’s ‘Trustworthy Computing’ security message.

E-Waste: UK Businesses Blow Tons of Cash


According to recent figures, UK businesses blew nearly $90 million in excess electricity bills last year. The reason was the simple failure to switch PCs off at the end of the day. According to UK Government figures, a computer that is continuously left on uses up to $100 more electricity a year than a PC that is switched off when not in use. This amounts to 50 percent of the average company’s annual electricity bill.

At a time when businesses are having to cut costs and find new ways of gaining advantage over competitors, they should be keeping a tight lid on operational costs. And yet UK companies are burning their profits on something as simple as turning PCs off. It's not just the waste of money that should concern businesses; it is the unnecessary carbon emissions. The same PC switched on 24 hours a day will emit up to one ton of carbon dioxide in a year. If the situation continues to be ignored, emissions from these idle PCs are estimated to rise to a colossal 3.7 million tons a year by 2020.

Even the Government has taken note and has introduced new tax concessions for environmentally sound business practices, concerned as it is about the scale of wasted natural resources and needless pollution in the UK. With these concessions in place, it is time for business managers to take responsibility.

There are two very simple solutions. The first is for employees to take responsibility and ensure they power down their PCs completely at the end of each day. But statistics show that even with a company log-off policy in place, at least 20 percent of employees will still fail to shut down each night. The second is for employers to take responsibility, even if purely for financial rather than environmental gain.
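
To see what that non-compliance costs, the per-PC figures quoted above can be scaled up to a fleet. The sketch below is illustrative only: the fleet size is an assumption, the 20 percent comes from the log-off statistic above, and the per-PC cost and emissions are the Government figures already cited.

```python
# Fleet-level estimate of the cost of PCs left running overnight, built from
# the per-PC figures quoted above. Fleet size is an illustrative assumption.
EXTRA_COST_PER_PC = 100    # $/year extra electricity for a PC never switched off
CO2_TONS_PER_PC = 1.0      # tons of CO2 per year for a PC left on 24 hours a day

def fleet_waste(num_pcs, share_left_on):
    """Dollars and CO2 wasted per year by the share of PCs left on overnight."""
    wasted_dollars = num_pcs * share_left_on * EXTRA_COST_PER_PC
    wasted_co2 = num_pcs * share_left_on * CO2_TONS_PER_PC
    return wasted_dollars, wasted_co2

# e.g. a 2,000-PC company where 20% of staff ignore the log-off policy
dollars, co2 = fleet_waste(num_pcs=2000, share_left_on=0.20)
print(f"~${dollars:,.0f} and ~{co2:,.0f} tons of CO2 wasted per year")
```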

Solutions exist that will automatically shut down PCs at the end of each day, and they cost only a few dollars per PC. When the fix is apparently this simple and the economy is so depressed, it is baffling why more companies don't do something about needless energy waste.

Chasing Rainbows: IT Project Failure Is Endemic in the UK


Mankind has demonstrated the ability to manage large complex projects for thousands of years. Construction projects such as the pyramids of Egypt or the Great Wall of China, among thousands of others, have left a striking legacy.

The ability to pursue challenging goals involving planning and organization is clearly innate in man. It might seem strange therefore that not all projects succeed, particularly IT projects, where disappointment and failure are endemic.

If there is a set of rules for ensuring success, nobody has yet identified them. But there are some patterns of behavior which maximize the likelihood of failure. This article considers some of those patterns of behavior.

Decide what to do, before establishing why you’re doing it

‘We must get our Website operational by July’
‘Why?’
‘Because we’ve heard that’s when XYZ are launching theirs’

This is the classic me-too approach to management that saw, for example, the entire UK life assurance industry rush like lemmings into unit trusts in the 1980s. The early birds benefited from the benign combination of a bull market and retail investors with cash. As always, a market reversal put the small investor to flight and the rest ended up chasing rainbows at considerable expense.

Without any genuine objective, the activity becomes the goal. The only measure of success is whether that activity took place, since nobody established in the first place what was intended to be achieved.

Indulge your own wishful thinking
Of all the causes of failure in human affairs, this is the most consistent long-term performer. It permeates almost all failing projects, whatever other causes there may be.

As a trivial example, I was once assigned to run a project in which my company was supplying third-party software to an end-user. The third party was an outfit in the same business as the end user, and the product was its homegrown system, then under development. Within six months it was demonstrable that the multi-phase project was receding at an accelerating rate.

The instigator of the deal sat on the information for several more months. While he was trying to work out how to blame someone else for the debacle, the damage grew rapidly. His company had created a bad deal, which it then turned into a disaster for its client and itself. The folly was in attempting to enter a new market without any investment and with no proper assessment of the risk to all involved. All the parties professed themselves deeply committed to the project.

Assume communication takes care of itself
In the early 1990s a service supplier to the European retail and wholesale banking industry devised a new product based on packaging its services. A new IT system was needed to support it. The key users in sales and marketing were too busy to help in the specification of this system, so it was left to the administration staff to define requirements. The system duly arrived and met all the identified needs.

Two weeks after the system went live the sales and marketing division announced the radical new pricing and packaging structure for the services, which they had spent the last six months devising! The new IT system did not cater for this, as the administration staff knew nothing of this new approach. The system was redundant and discredited two weeks after it went live – and was never used again. The key players had not been involved with specifying a system because they were far too busy changing something fundamental in their business.

Insist on staying with the tried and trusted
It is normal in procurement to insist that the supplier should have done a similar job before. This makes eminent sense, provided the similarity extends to the context. The rapid pace of change in some environments makes that proviso particularly important. Success brings with it the danger of clinging too long to the same tools and methods.

The IT industry is notoriously fast-moving. Yet the working life of a successful business system or software product is many years, even decades. When the time comes for its replacement, the same approach to its development is very unlikely to work as well.

Let technical experts decide
The High Priest syndrome has been a menace in the IT industry since its inception. It represents a cop-out by top management decision-makers, some of whom consider IT a grubby and undignified pursuit. All too often the technocrats encourage it, only to find themselves used as scapegoats when IT projects that are not business-led fail further down the line.

It is essential that major IT decisions are understood by the senior management team, to ensure projects fit the overall business strategy and receive the support they need. The decision-making process should be supported by functional management able to provide both advice and implementation.

Tentative conclusions
The aim of this article has been to identify and classify some typical patterns leading towards failure. The examples are chosen to illustrate the patterns, not to point fingers with 20/20 hindsight at others' efforts.

The common theme is that, regardless of industry or era, failure stems more often from psychological causes than from technical ones, the fundamental cause being a lack of realism. We have looked at the nature of some of the barriers to clear and realistic thinking. None of us can be completely honest with ourselves when there is a host of conflicting pressures and desires, but if we want to make the most of our potential, that must be the aim. If the patterns noted above help us to recognize when our realism is under threat, they will have achieved something valuable.

Devolving Documents: Get Ready For Outsourcing


At the close of 2003, a number of well-known analyst organizations made predictions for 2004. One of their common and consistent emphases was an upturn in European IT markets which would be greatly fuelled by the inexorable rise of IT outsourcing. Yet is IT the only, or even the most important, area to apply the outsourcing model?

Across the country, in both the private and public sectors, the IT outsourcing opportunity has diminished somewhat. My company's research shows that, amongst large companies (over 250 employees), IT outsourcing has reached a saturation level of some 28 percent. How much further there is to go is perhaps indicated by the aggregated view of the various technology analysts, who see the US market reaching a saturation point approaching 40 percent of the larger company segment.

These predictions are given extra credence with the emphasis that leading management consultants are lending to the idea of ‘network organizations’ that outsource everything but their core activities and skills. What, then, of other areas susceptible to outsourcing, and what potential for real benefits do they offer the corporation or the public sector body? Case study examples indicate that document outsourcing presents corporations and public sector organizations with rapid return on investment, coupled with low business process risk.

The first surprise is the sheer size of document production in this country. Document production spending in 2003 was over 38 percent of the amount spent on IT in the same year. How often does one read about the potential for corporate efficiencies through more efficient document production, compared with discussions on the subject of IT outsourcing? Evidently, document production may seem to many to be less engaging for the analysts than IT matters. Yet it is capable of delivering a comparable scale of competitive advantages, and savings on the bottom-line.

Our research reveals that whereas Seattle IT outsourcing currently sits at some 28 percent of IT spending, document outsourcing accounts for a mere 12 percent of the document production market. Again, the US example (which usually foreshadows the UK by a few years) corroborates the growth potential. In the US, document outsourcing now represents 22 percent of all larger-organization document production, almost twice its UK equivalent.

So document outsourcing holds proportionately greater potential, for organizations and outsourcing companies alike, than IT outsourcing. Whilst there is little doubt that it will take several years for the UK to reach comparable market maturity, the rate at which UK organizations grasp the advantages of outsourcing over this period is expected to be rapid, especially as management consultants increasingly recommend document outsourcing as an area for priority attention and straightforward gain. In short, if document outsourcing were to reach 40 percent saturation of larger companies (the predicted US outsourcing saturation level), it would represent a 4.3bn-euro marketplace, where it makes up only some 1.3bn euros today.
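
The market-sizing arithmetic behind that last sentence is easy to check: if 12 percent penetration corresponds to roughly 1.3bn euros, then scaling linearly to the 40 percent US-style saturation level gives about 4.3bn euros. A quick sketch, with linear scaling as the assumption:

```python
# Check of the market-sizing arithmetic above, assuming revenue scales
# linearly with penetration of the larger-company segment.
current_market_bn_eur = 1.3    # document outsourcing market today
current_penetration = 0.12     # 12% of document production is outsourced
target_penetration = 0.40      # predicted US-style saturation level

addressable_bn_eur = current_market_bn_eur / current_penetration   # ~10.8bn
target_market_bn_eur = addressable_bn_eur * target_penetration     # ~4.3bn

print(f"market at 40% saturation: ~{target_market_bn_eur:.1f}bn euros")
```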

Much media space is also being devoted to the issue of outsourcing to companies overseas, known as offshoring. Politicians and unions have been among the most outspoken critics of this phenomenon, raising the specter of US (and Seattle) jobs being lost to India, South Africa and Eastern Europe. In fact, the call center industry is still showing net growth, despite a fair number of financial institutions having relocated their call centers abroad. So what impact does call center offshoring have on document outsourcing? The answer is: more and more over the next few years. Interestingly, though, document issues are likely, if anything, to slow the move to foreign climes.

Integrating documents with the call center – especially one providing a customer service function – is becoming increasingly important both to call center efficiency and to resolving customer queries more satisfactorily. It is estimated that some 50-60 percent of customer service queries in financial services require supporting documents to be sent to the caller, whether for marketing or for regulatory reasons.

A hidden cost of offshoring, therefore, is integrating document production and mailing with the foreign call center's systems. Equally, call center agents can answer statement or billing queries far more efficiently if they can retrieve and view documents in exactly the visual format in which they were sent to the customer. Again, physical dislocation, whilst not insurmountable, incurs cost and risk.

Document production and mailing, by their very nature, cannot be offshored: timeliness of delivery is essential, so they need to be situated in the country of delivery. It is technically conceivable that document outsourcing could be located abroad, but the economics of ensuring reliable and timely distribution would far outweigh the labor cost reduction, and it would not address the issue of political risk. Document outsourcing therefore remains a national market, not a globalized one.

In conclusion, Seattle organizations in both the private and public sectors would do well to pay just as much attention to document outsourcing as they do to IT outsourcing. The document outsourcing market is not yet as large as its IT counterpart, but because it is far less saturated, it offers greater potential for rapid return on investment.

Many argue that document outsourcing carries less project risk compared with the IT equivalent. If this argument is accepted, then organizations under pressure to deliver cost savings, improved service delivery, and competitive advantage, would be well advised to look carefully at document outsourcing in 2004 and beyond.
