The marketing automation industry has snowballed over the past few years, and yet it’s only just getting started.
According to Marketing Automation Insider, in the past five years alone, we’ve seen over $5.5B worth of acquisitions and aggregate vendor revenues grow from $225M to $1.65B!
At the current rate of adoption and innovation, you may be wondering where marketing automation is headed, which trends are emerging, and how it will all benefit your business. Well, my friend, you’ve come to the right place! Here, we’ll explore three major shifts in the marketing automation industry and how they’ll impact your business on both a macro and a micro level. Let’s get started…
1. Predictive analytics is making guessing a game of the past
One of the major trends emerging in the marketing automation industry is the use of predictive analytics and machine learning to power sales and marketing decisions.
Predictive analytics uses clever statistical models to identify what your customers will likely do next and then automatically uses those insights to trigger certain actions.
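To make the idea concrete, here is a minimal sketch (not any particular vendor’s implementation) in which a simple purchase-propensity model, trained on historical engagement data, decides which automated action a lead should receive next. The feature set, thresholds and action names are all hypothetical.

```python
# Minimal sketch: a purchase-propensity model driving an automated action.
# Feature names, data and the 0.6 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical leads: [emails_opened, pages_viewed, days_since_last_visit]
X = np.array([
    [5, 12, 1],
    [0,  1, 30],
    [3,  8, 2],
    [1,  2, 21],
    [6, 15, 1],
    [0,  0, 45],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = lead went on to purchase

model = LogisticRegression().fit(X, y)

def next_action(lead_features):
    """Trigger a different automation step depending on predicted propensity."""
    p = model.predict_proba([lead_features])[0][1]
    if p >= 0.6:
        return "send_discount_offer"      # hot lead: push toward checkout
    elif p >= 0.3:
        return "send_nurture_content"     # warm lead: keep educating
    return "wait_and_rescore_next_week"   # cold lead: don't spam

print(next_action([4, 10, 3]))
```

The point of the sketch is simply that the decision rule is learned from data and re-learned as more data arrives, rather than being fixed by a marketer’s intuition.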
In 2012, Amazon filed a patent for a predictive analytics system that would allow them to begin shipping products before a customer even ordered them. (Crazy thought, right?!) But by predicting the probability of someone buying a product based on their behavior on the website and previous history, Amazon could reduce its shipping times and move product faster.
There are countless uses for this type of technology, but in the context of marketing automation, it provides the opportunity to eliminate guesswork.
How long should you wait between sending two emails in a lead nurturing sequence? Which content (blogs, ebooks, etc.) should you send to a certain type of lead? How should you score a lead from a particular marketing channel?
While experience tends to provide good answers to these questions, predictive analytics can provide dynamic answers that, like a good bottle of wine, become better over time, ultimately surpassing an experienced marketer’s hunch.
As a result, marketing and sales will become less of a guessing game. The most important consequence is that companies using predictive analytics will have the competitive advantage.
Predictive analytics adoption is still in its infancy, providing the innovative early birds with a great opportunity to get a head start. The question is, will it be you or your competitors that gain this competitive advantage?
2. Intelligent multi-channel marketing is becoming the norm
Tests have shown that when you target a customer both in their inbox with an email and on Facebook with a matching ad, the customer is 22% more likely to purchase than if you had only sent the email.
In multi-channel marketing, the whole is usually greater than the sum of its parts. This is especially true when you add a layer of personalization into the mix, which is of course possible with marketing automation.
Let’s look at an example. Imagine that you’re the marketing director of a company that sells kitchen appliances online.
A prospect named Molly visits your website and adds a fridge to her shopping cart (obviously a very large shopping cart). Molly enters her details to check out but never completes the transaction.
In this situation, your multi-channel strategy could look something like this (a rough code sketch of the same logic follows the list):
- After 30 minutes, an automated email is triggered encouraging Molly to complete her order (in case she got distracted by a phone call…or Game of Thrones).
- Using retargeting through a sync between your marketing automation platform and Facebook, Molly sees an ad saying “Molly, are you refurbishing your kitchen?” that links to a separate landing page. If Molly engages with this landing page, she is entered into a whole new sequence of upsells and offers incentivizing her to buy more items.
- If after one day Molly still hasn’t bought the fridge, a text message could be sent asking if there’s anything your support team could answer or help her with.
- If several days later there’s still no purchase, a separate email and Facebook ad campaign could be triggered targeting Molly with different fridges and freezers relating to her original search criteria.
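In practice a sequence like this is configured inside a marketing automation platform rather than coded by hand, but as a rough sketch of the underlying logic it might look like the following. The channel names, delays and message labels simply mirror the hypothetical steps above; the “send” functions are placeholders, not any real platform’s API.

```python
# Illustrative sketch of the abandoned-cart sequence described above.
# Delays and messages mirror the list; nothing here is a real vendor API.
from datetime import timedelta

ABANDONED_CART_SEQUENCE = [
    # (delay after cart abandonment, channel, message)
    (timedelta(minutes=30), "email",    "complete_order_reminder"),
    (timedelta(hours=1),    "facebook", "personalised_retargeting_ad"),
    (timedelta(days=1),     "sms",      "offer_support"),
    (timedelta(days=3),     "email",    "related_products_campaign"),
    (timedelta(days=3),     "facebook", "related_products_ad"),
]

def due_steps(time_since_abandonment, has_purchased):
    """Return the steps that should have fired by now for a given lead."""
    if has_purchased:
        return []  # a purchase ends the sequence immediately
    return [step for step in ABANDONED_CART_SEQUENCE
            if step[0] <= time_since_abandonment]

# Example: Molly abandoned her cart 26 hours ago and still hasn't bought.
for delay, channel, message in due_steps(timedelta(hours=26), False):
    print(f"{channel}: {message} (scheduled +{delay})")
```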
This type of multi-channel nurturing is immensely effective for a number of reasons:
- It’s underused. Despite being very effective, few companies are running such hyper-personalized campaigns. This will likely change over the next few years, as more and more companies realize its effectiveness.
- With more channels, you can capture more data from your customers, leading to more relevant targeting. The more relevant your targeting, the more likely a conversion will be.
- It makes it virtually impossible to lose a customer due to distraction, as you’re able to communicate with the buyer across devices and platforms and at different times.
We’re likely to see a lot more multi-channel marketing over the next year or two. As marketing automation tools improve their offerings and features, and as more case studies emerge, more and more businesses will begin to use this powerful tactic.
Could this sort of multi-channel marketing help your business convert more leads into customers?
3. Marketing automation is becoming widely adopted
Two waves are converging in the marketing automation industry to form a tsunami-like surge of businesses interested in marketing automation: 1) the increased awareness of the value of marketing automation, and 2) the increasing impact and capabilities of marketing automation software.
Since 2012, the number of case studies, articles, webinars, and events covering marketing automation has exploded, resulting in a heightened awareness of the impact of adopting a marketing automation solution.
Simultaneously, marketing automation software is becoming more and more powerful. With new features and functionality, such as real-time personalization and adtech-geared tools, as well as an increasing pool of experts readily sharing best practices, the scope of what marketing automation software can achieve is continuing to expand. Word—and excitement—is spreading like wildfire.
These two converging waves have an obvious result: more and more businesses (your competitors included) are implementing a marketing automation solution. As such, it pays to get ahead of the game and start building your marketing automation campaigns early. That way, by the time your competitors are building out their first campaigns, your business could have hundreds of active, fine-tuned campaigns already working on leads. Get to it!
Conclusion
As far as we can tell from the data, marketing automation is becoming smarter, more tailored, and more accessible. The combination of these trends explains why the industry has grown so rapidly over the past five years and why it does not appear to be slowing down anytime soon.
For business owners, the lesson here is simple and Darwinian: evolve now (aka get automating), or risk losing your market share to your competitors.
Don’t expect straight answers on data sharing from the firms that profit from it
From The Conversation.
Data is a new currency of sorts: we all generate a lot of it, and many companies already use it to serve their ends or ours. But, for many very good reasons, it’s not easy to persuade people that they should give their data away. There are more than enough surveillance scandals or data breaches to make an open approach seem like a bad idea.
A study commissioned by the Digital Catapult, a working group bringing together academics and industry – and conducted by a credit checking firm – concludes that consumers want new services that will allow them to collect and manage personal data, and that paying people to share their personal data creates new business opportunities.
The report suggests that people are eager to use their own personal data as a means to earn money: a majority of respondents (62%) said that they would be willing to receive £30 per month to share their data. But respondents could only choose between £2 and £30 per month – there was no option to opt-out and share nothing at all. Of course, if you’re facing a choice where you can earn something or nothing for your data that will be hoovered up anyway whether you like it or not, it’s hardly surprising that people choose the maximum available. It feels as if the rest who answered otherwise just didn’t understand the question.
So, it doesn’t mean personal data is worth £30 per month, and nor does it stand up the report’s claims of “£15 billion of untapped wealth for UK consumers”. This value is arbitrary, plucked out of the air and relative only to the limited choices people were offered. No doubt they’d not turn down £100 or £1,000 either.
The value of data, and to whom
The research I’ve carried out suggests that it’s hard to put a monetary value on personal data sharing, although in some circumstances it is possible to estimate how much people are willing to pay to keep their data secure. Assessing how much data is worth is difficult because one person’s data tends only to gain value when it is aggregated with other people’s data. This makes it hard, if not impossible, to decide what a single person’s data is really worth – even if some attempts have been made to find a market price.
In fact many people don’t really understand what their personal data is, how it is stored and used – and this is something the report backs up. Considering these difficulties, then, is it ethical to use cash incentives to persuade people to hand over their personal data? Do those doing so understand the potential implications, and should those offering the cash be required to explain?
Some stand to profit more than others
Let’s not ignore the fact that the study was carried out by a credit checking firm. The firm’s business is based on gathering considerable amounts of personal information from various sources and then selling it on to others, including the same organisations from which it drew the data in the first place and the people who have become records in its systems. Credit checking firms have access to many data sources that describe our lives, such as banking records, our home address, bills with various utility companies and the like. However, there are many other digital pieces of information about us which are not yet shared with credit checking companies.
Perhaps it’s no surprise that this report justifies the introduction of services that will allow third parties to collect and manage more of the personal data that we produce. For example, a firm might launch a personal data management app that collates mobile phone use, GPS records, loyalty card data, health data from fitness apps and your NHS records, alongside charitable donations, social network data and productivity apps. It could then offer to pay you a fee – say, £30 a month – to share some or all of that data with it, at which point almost everything about you, from work and physical health to habits and social groups, could be discovered, triangulated and used for its financial benefit.
Risk vs benefits
While the report suggests such services might benefit the public good, the question is who holds the reins: government, business, or the third sector? Having your data will directly benefit those in control of it, and those they sell or share it with – but would it benefit you?
Time will show whether this degree of data integration is beneficial at all. But given that people understand so little about how their personal data is created, gathered and shared – and the implications of all this – it seems simplistic to offer cash incentives for people to share the information that describes their lives in detail. That’s without even considering the many risks posed by collating so much information on individuals and storing it together – as tales of massive data breaches, losses, and abuses continue to remind us.
Author:
Research Fellow, Horizon Digital Economy Research Institute at University of Nottingham
Mobile Payments: The Delay of Instant Gratification
Good article from strategy+business, on the fact that platforms like Apple Pay and Google Wallet will need to ensure a seamless and secure experience for merchants and consumers. It once again highlights the upcoming payment wars.
During the next few years, many competitors, from both financial services and the hardware and software industries, will jockey for control of the sector. Payments for retail purchases through smartphone apps still represent a tiny fraction of transactions for the $2 trillion worth of goods and services that pass through retail establishments and banks each year in the United States; still, by 2018 digital wallet transactions will likely grow to represent about 6 percent of total card transactions — the majority being small-ticket purchases made online or within apps. This figure may sound small, but it’s a significant shift: Few would argue that e-commerce isn’t mainstream, yet Internet sales represented only 6 to 7 percent of all retail sales in the United States in 2014.
Just as people tend to compartmentalize their use of credit cards — one card for daily purchases, another for big purchases, and several for specialty retail — they are likely to use different mobile payment apps for different brands and different types of transactions. But only a few general-purpose branded e-wallets are likely to be left standing when the industry shakes out; that’s the nature of shared platforms. The companies that ultimately control the mobile payments platform may be technology companies or banks or retailer consortiums, or a combination of all three. The winners will be those platforms that offer five critical elements:
1. Merchant acceptance. Apple Pay is accepted at more than 700,000 merchant locations, but that number is less than 10 percent of the 8 million to 10 million merchant locations in the United States. This is significant: Consumers are less likely to use a credit card that’s not accepted everywhere. As a point of contrast to mobile adoption, Visa and MasterCard’s traditional plastic cards are accepted at 99 percent of merchant locations.
2. Interoperability. In its current version, Apple Pay does not support all cards or merchants; some private-label store credit cards and regional debit networks are excluded. Eighty-three percent of the credit-issuing market had agreed to be part of Apple Pay when it launched, but that left almost 20 percent of the market unsupported. Recently, Apple has started to expand its coverage. For example, the company worked out an agreement with Discover in April 2015 that will give Discover customers access to the app beginning the following fall. Further expansion will be critical if Apple is to ensure that any customer is able to make a transaction from his or her bank.
3. Security. For consumers and merchants alike, the fear of a breach is currently the number one obstacle to adopting mobile payments. Retailers have taken to heart the experience of Target, whose profits declined 45 percent after its well-publicized security breach in late 2013. The CEO and CIO were let go, and the company spent $250 million, excluding insurance offsets, to address the issue. Apple advanced the game by “tokenizing” the transaction (removing account information from the data flow) and using fingerprint recognition technology. But until end-to-end encryption is in place to secure the entire transaction, security holes will persist. Financial institutions and merchants will continue to battle global criminal sophisticates.
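As a toy illustration of what tokenization means in principle (this is not Apple’s actual scheme), the card number is swapped for a random token before it ever enters the payment flow, and only a separate, tightly guarded vault can map the token back to the real account:

```python
# Toy illustration of payment tokenization (not Apple's actual scheme):
# merchants and networks only ever see a random token; the mapping back
# to the real card number lives in a separate vault.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}  # token -> primary account number (PAN)

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print("merchant sees:", token)              # no card data in the payment flow
print("vault resolves:", vault.detokenize(token))
```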
4. Platform integration. Many single-point mobile innovations exist, but they do not fit together seamlessly. One such app is Milo (acquired by eBay in 2010), which performs online searches for specific products in stores near its users’ location. Another app acquired by eBay in 2010, RedLaser, allows consumers to scan a product’s barcode in a store and immediately uncover the lowest price for that product, online or at nearby retailers. Some major retailers, including Starbucks and Walmart, have their own mobile apps with payment capabilities. At first glance, apps like these may seem to offer little more than convenient electronic credit cards. But an app, compared to a mobile-based website, is a more controllable, customizable handheld environment for the retailer. It enables businesses to better analyze customers even as customers gain more intelligence about products and services. With the people on both sides of the point-of-sale becoming smarter about one another, the behavior of shoppers and retailers is poised to change. As part of this transition, retailer loyalty, reward, and payment programs need to be supported by and integrated into shared mobile payment platforms.
5. Marketing data integration. Historically, for a variety of reasons, merchants have been unable to consistently and correctly link an individual consumer record directly to every payment transaction. Today, the convergence of the mobile phone, the payment transaction, and the online environment enables companies to track individual customers from the initial marketing impression all the way through the purchase. Those providers that leverage their mobile platforms for one-to-one marketing — before, during, and after a retail payment transaction — will have a leg up on the competition. They can achieve the holy grail of consumer marketing: precise marketing ROI calculations for segments of one. For example, a merchant could send a digital coupon via text and allow consumers to opt in, and then send personalized reminders only to those consumers. The merchant could then track coupon usage from mobile payments to determine the conversion rate and the overall marketing ROI.
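As a back-of-the-envelope illustration of that last point, with entirely invented numbers, conversion rate and campaign ROI fall straight out of the data once coupon sends can be matched to mobile payment transactions:

```python
# Hypothetical numbers: matching coupon sends to mobile-payment transactions
# makes the conversion rate and campaign ROI a simple calculation.
coupons_sent     = 10_000     # opted-in customers who received the text offer
coupons_redeemed = 850        # redemptions observed in mobile-payment data
revenue_per_sale = 42.00      # average basket attributable to the offer
campaign_cost    = 6_000.00   # creative, delivery and discount cost

conversion_rate = coupons_redeemed / coupons_sent
incremental_revenue = coupons_redeemed * revenue_per_sale
roi = (incremental_revenue - campaign_cost) / campaign_cost

print(f"conversion rate: {conversion_rate:.1%}")   # 8.5%
print(f"marketing ROI:   {roi:.0%}")               # 495%
```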
Such scenarios hold great promise. But realizing them requires the establishment of a complex web of institutional relationships. Who will track the data? Who will store the data? How will different institutions coordinate? Which standards will be used? And what emerging business models can monetize the new value creation? Not all the answers are obvious. But it is clear that traditional banks and financial institutions will find their greatest opportunity by leveraging their data. When financial institutions couple internal data with external data sources, they can begin to help merchants grow their business and provide consumers with a more personalized and robust shopping experience. The winners will convert that data into enhanced solutions across the value chain: targeted local and national offers, multifactor authentication, and security alerts.
The payment providers that stitch together merchant acceptance, mobile solution integration, and marketing fueled by data will be well on their way to success. Once that finally happens — and it will — customer relationships and marketing will never be the same.
Mapping the Digital Shopper’s DNA
Excellent article from McKinsey – “Cracking the digital-shopper genome”. Companies have more data at their fingertips than ever, so why do online shoppers remain such a mystery? The solution begins with bringing all the information together to form a meaningful picture of the consumer.
While every e-commerce company wants a comprehensive view of its customers, few put in place a disciplined system for collecting and organizing those insights. In the same way that cracking the human genome requires decoding the DNA packages it comprises, companies should aspire to create a complete picture of the customer across a complete set of shopper characteristics (a simple data-structure sketch follows these descriptions):
Customer decision journey captures customer behavioral pathways and attitudes at each stage of a purchase journey. A customer may initially look for inspiration (ideas on what to buy) and then information (product descriptions, reviews, informational blogging content) before seeking the best way to buy a given service or product. Interactions can be tailored to this process: for example, one technology manufacturer seeks to identify shoppers on its website who are early in their journey and ensures they don’t see pricing promotions, which are instead offered to visitors who are closer to actually buying.
Digital-channel preference highlights how a shopper prefers to interact with a brand. These insights come from understanding how customers interact through various digital channels—such as apps, e-mail, social media, and video—and the ultimate value of that interaction. The most sophisticated approaches then map these channel preferences to phases of the customer decision journey to create a clear picture of the customer’s cross-channel experience.
Product affinity details what products and product attributes customers prefer across brands and categories. These insights are based on where customers spend their time while visiting a website and on their product-purchase history, analyzed for “key preference indicators” that help to create useful product taxonomies, such as whether the customer shows a preference for a certain designer or style. In structuring a taxonomy based on this behavior, retailer The Children’s Place, for example, uses Demandware’s CQuotient to analyze language in customer reviews and selects the most relevant words to inform a product’s metadata.
Response to offers details how customers respond to various offers and what incremental results those interactions produce. These responses track how coupon offers, discounts, and loyalty rewards, for instance, affect customer shopping behavior at a level of detail that allows an e-commerce company to understand which offers yield the most cost-effective payoff by customer segment and, eventually, by individual customer.
Life moments and context looks at episodes in a customer’s life (such as having a child, getting a new job, or moving house) and behavior during seasonal events (such as at Christmas or on vacation). This analysis provides a better understanding of the consumer based on how much time typically elapses between purchases and whether the customer is buying consumables or durables (such as furnishing a new home or office).
Demographics, preferences, and needs provide insights about shoppers based on information beyond interactions with a specific e-commerce company. In recent years, there’s been impressive growth in the quantity and quality of data aggregated about customers at the “abstracted ID” level (that is, information that is not personally identifiable). Sophisticated data aggregators such as Acxiom and Nielsen’s eXelate are able to append not only demographic data such as age, gender, or zip code–based income level but also preferences and intent deciphered from browsing behavior across networks of hundreds of websites. For this to work over time, e-commerce companies need to adhere to strict standards about keeping data abstracted and respecting consumer privacy.
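To make the genome metaphor concrete, one hypothetical way to hold these six characteristic sets for a single shopper is a structured profile that personalization logic can query. The field names and values below are illustrative, not a McKinsey or vendor schema.

```python
# Hypothetical profile covering the six characteristic sets described above.
from dataclasses import dataclass, field

@dataclass
class ShopperProfile:
    customer_id: str
    journey_stage: str = "inspiration"                         # customer decision journey
    channel_preferences: dict = field(default_factory=dict)    # digital-channel preference
    product_affinities: list = field(default_factory=list)     # product affinity
    offer_responses: dict = field(default_factory=dict)        # response to offers
    life_moments: list = field(default_factory=list)           # life moments and context
    demographics: dict = field(default_factory=dict)           # demographics, preferences, needs

molly = ShopperProfile(
    customer_id="abstracted-7f3a",
    journey_stage="information",
    channel_preferences={"email": 0.7, "app": 0.2, "social": 0.1},
    product_affinities=["mid-range appliances", "energy-efficient"],
    offer_responses={"free_shipping": "redeemed", "10_pct_off": "ignored"},
    life_moments=["recent home move"],
    demographics={"age_band": "30-39", "income_band": "mid"},
)

# Example rule from the journey description: hide price promotions from
# shoppers who are still early in their journey.
show_promotions = molly.journey_stage not in ("inspiration",)
print(show_promotions)
```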
Let’s not regulate away the competition fintech can bring
Fintech firms are infiltrating all areas of financial services, from payments platforms, lending, capital raising, investment, advice and insurance to capital markets.
Fintech firms, which are essentially disruptive digital finance models, can help lower barriers to entry in financial services. They are also reducing transaction costs, addressing issues of information asymmetry, empowering consumers and facilitating international linkages. All these contribute to the regulatory goals of efficiency and fairness.
Around the world governments have responded positively to the new models, recognising the benefits of driving competition, and increasing access to markets, especially in areas such as small business finance. Models such as peer-to-peer lending and Crowd Sourced Equity Funding (CSEF) have been welcomed in the US, Europe and the UK.
Governments in the UK, Canada and New Zealand have either implemented or are finalising regulatory regimes to support CSEF. In the US, the enactment of the Jumpstart Our Business Startups (JOBS) Act, which referenced the importance of online funding for start-ups and other companies seeking to raise capital, was pivotal for CSEF globally.
But facilitating the practical move to market of new digital finance companies is a challenge for both the fintech firms and regulators.
Compliance challenge
Young fintech companies with limited resources face a significant hurdle navigating the maze of regulatory licensing and compliance requirements. Much of this has been developed for far more mature and larger organisations.
On the other hand, regulators struggle to balance openness to innovation and disruptive technologies with protecting the interests of consumers, investors and the privacy of individuals.
The United Kingdom has led the way in many respects. The Financial Conduct Authority has established a network called Project Innovate which supports industry innovation to improve consumer outcomes. The UK fintech industry also has its own industry body, Innovate Finance, to support technology-led financial services innovators.
In Australia, it’s the Australian Securities and Investments Commission that’s tasked with licensing and monitoring fintechs. This needs to be done with a keen eye also on the need to enhance competition in the system and build confidence in the new models to ensure participation by both investors and consumers.
To assist in this process, ASIC has developed an “innovation hub”: a single point of entry for innovators seeking regulatory approval, intended to make it easier for them to navigate the regulatory system.
Collaborative approach
The innovation hub can be linked to a growing realisation that collaboration is required between financial sector innovators, consumer groups, academics, relevant government agencies and regulators to deal with complexity and examine opportunities from a system-wide perspective.
Accordingly ASIC has also announced the establishment of a Digital Finance Advisory Committee, with representation from the fintech community, consumers and academics. The focus is on streamlining ASIC’s approach to facilitating new business models with common application processes, including applying for or varying a licence and in granting waivers from the law.
These initiatives represent a significant departure for Australian regulators for two key reasons. First, our regulatory agencies have tended to retain a low-risk, conservative approach to regulation. On an international basis, this stood us in good stead throughout the global financial crisis, when our regulatory agencies were recognised internationally for their prudent approach. But this had increasingly become a barrier to innovation.
Second, this is a highly collaborative approach, something that is generally not a hallmark of Australian business. The lack of collaboration has been a weakness of our system in generating interaction between researchers, innovators, corporates and regulators and we have been amongst the least collaborative of all OECD countries, to our cost. Bringing together these parties through the Digital Finance Advisory Committee, and fintech hubs such as Stone and Chalk, could lay a foundation for more sustainable financial services innovation.
Opening the way to greater financial innovation is especially important in an economy with such a highly concentrated financial sector.
Ultimately, the ability of key agencies to adapt and adjust risk levels to accommodate disruptive technologies will affect both the domestic and international competitiveness of the Australian finance sector, to the benefit of Australian consumers and businesses.
Author:
Professor of Finance and Director at Monash University
‘Internet Of Things’ Connected Devices To Almost Triple To Over 38 Billion Units By 2020
Total Device Base Driven by Surge in Connected Industrial Applications
New data from Juniper Research has revealed that the number of IoT (Internet of Things) connected devices will number 38.5 billion in 2020, up from 13.4 billion in 2015 – an almost threefold increase.
While IoT ‘smart home’ based applications grab media headlines, it is the industrial and public services sector – such as retail, agriculture, smart buildings and smart grid applications – that will form the majority of the device base. This is due in no small part to a much stronger business case for these types of applications.
Michelin and John Deere, for example, have successfully transitioned their businesses towards being service based companies through the use of IoT, as opposed to their previous incarnations as product vendors.
IoT Still in a Nascent State
The new research, The Internet of Things: Consumer, Industrial & Public Services 2015-2020, found that while the number of connected devices is already more than double the number of humans on the planet, for most enterprises, simply connecting their systems and devices remains the first priority.
‘We’re still at an early stage for IoT’, noted research author Steffen Sorrell. ‘Knowing what information to gather, and how to integrate that into back office systems remains a huge challenge.’
Additionally, interoperability hurdles owing to conflicting standards continue to slow progress. Nevertheless, there are signs that standards bodies and alliances are beginning to engage to overcome these hurdles.
Data as Information Becomes Key
According to Juniper Research, the IoT ‘represents the combination of devices and software systems, connected via the Internet, that produce, receive and analyse data. These systems must have the aim of transcending traditional siloed ecosystems of electronic information in order to improve quality of life, efficiency, create value and reduce cost.’
The research notes that the IoT, therefore, is as effective as the sum of its parts. Mere connections create data; however, this does not become information until it is gathered, analysed and understood. The analytics back-end systems of the IoT will therefore form the backbone of its long-term success.
Additional Findings
- The consumer segment (composed of the smart home, connected vehicles and digital healthcare) represents a high ARPU (average revenue per user) market segment.
- Meanwhile, the industrial sector (composed of retail, connected buildings and agriculture) will enable high ROI (return on investment) through IoT projects, owing to more efficient business processes.
The whitepaper, IoT ~ Internet of Transformation, is available to download from the Juniper Research website together with further details of the new research and interactive dataset.
Juniper Research provides research and analytical services to the global hi-tech communications sector, providing consultancy, analyst reports and industry commentary.
SOURCE: Juniper Research Ltd
Why Companies Need Their Customers to ‘Love’ Them
Important opinion piece in Knowledge@Wharton, which reaffirms our view that most traditional organisations still do not “get” the digital transformation which is under way – as discussed in our “Quiet Revolution” report.
For many of us, Google, Apple, Facebook and Amazon (the GAFA four), feel as essential as the air we breathe. It’s hard to imagine our lives — working, socializing, shopping and entertaining — without them.
They are outstanding in the intimacy that they create with their customers. They make a strong effort to understand the unique characteristics and preferences of each customer and use the insights that they gain to serve the customer better. Further, they see each customer as a complete personality with needs around different facets such as work, play, socializing and self. They serve these needs wholly — and this, in turn, encourages more sharing and openness from their customers.
In short, these four companies are building a long-term, holistic and generous relationship with their customers. It’s almost as if they love us — not like our parents or spouses, of course, but by way of “unselfish, loyal and benevolent concern for the good of another.” And the result? In 2014, Google’s revenue was up 19% year over year, Amazon’s sales were up 20% year over year, Facebook’s revenue was up 58% year over year and Apple’s revenue was up 7%, ending the year with their best-ever quarter.
Traditional brands are trying to join the game and gain that essential-as-air quality. But they find it difficult to move beyond a transactional relationship. Usually, we only see and hear from them when they want something from us. These companies are very focused on whether or not customers are loyal to them, but they rarely consider how loyal they are to their customers. Their actions often seem self-serving, such as soliciting Facebook “likes” to promote themselves. This short-sighted approach stems from a common view that customers don’t have much to offer beyond what’s in their wallets.
Most established firms remain hesitant when it comes to this type of customer equality and mutuality. Our research on the business models of the S&P 500 Index companies (based on data from 1972 to 2013) indicates that at present more than 80% of companies employ older business models where customers are valued only for their dollars and not for their assets, insights and contributions. If your organization can break ranks and adopt this new way of thinking and acting, you will see that the more you share with your customers and the more you understand them, the more they will love you.
Windows 10 is not really free: you are paying for it with your privacy
Windows 10, it seems, is proving a hit with both the public and the technology press after its release last week. After two days, it had been installed on 67 million PCs. Of course, sceptics may argue that this may have simply been a reflection of how much people disliked Windows 8 and the fact that the upgrade was free.
For others, though, it is the very fact that the upgrade is free that has them concerned that Microsoft has adopted a new, “freemium” model for making money from its operating system.
They argue that, while Apple can make its upgrades free because it makes its money from the hardware it sells, Microsoft will have to find some way to make money from doing the same with its software. Given that there are only a few ways of doing this, it seems that Microsoft has taken a shotgun approach and adopted them all.
Free upgrade
Chris Capossela, Microsoft’s Chief Marketing Officer, has declared that Microsoft’s strategy is to “acquire, engage, enlist and monetise”. In other words, get people using the platform and then sell them other things like apps from the Microsoft App Store.
The trouble is, that isn’t the only strategy that Microsoft is taking. Microsoft is employing a unique “advertising ID” that is assigned to a user when Windows 10 is installed. This is used to target personalised ads at the user.
These ads will show up whilst using the web, and even in games that have been downloaded from the Microsoft App Store. In fact, the game where this grabbed most attention was Microsoft’s Solitaire, where users are shown video ads unless they are prepared to pay a US$9.99 a year subscription fee.
The advertising ID, along with a range of information about the user, can be used to target ads. The information that Microsoft will use includes:
[…] current location, search query, or the content you are viewing. […] likely interests or other information that we learn about you over time using demographic data, search queries, interests and favorites, usage data, and location data.
It was not that long ago that Microsoft attacked Google for doing exactly this to its customers.
What Microsoft is prepared to share, though, doesn’t stop at the data it uses for advertising. Although it maintains that it won’t use personal communications, emails, photos, videos and files for advertising, it can and will share this information with third parties for a range of other reasons.
The most explicit of these reasons is sharing data in order to “comply with applicable law or respond to valid legal process, including from law enforcement or other government agencies”. In other words, if a government or security agency asks for it, Microsoft will hand it over.
Meaningful transparency
In June, Horacio Gutiérrez, Deputy General Counsel & Corporate Vice President of Legal and Corporate Affairs at Microsoft, made a commitment to “providing a singular, straightforward resource for understanding Microsoft’s commitments for protecting individual privacy with these services”.
On the Microsoft blog, he stated:
In a world of more personalized computing, customers need meaningful transparency and privacy protections. And those aren’t possible unless we get the basics right. For consumer services, that starts with clear terms and policies that both respect individual privacy and don’t require a law degree to read.
This sits in contrast to Microsoft’s privacy statement, which is a 38-page, 17,000-word document. This suggests that Microsoft really didn’t want to make the basic issues of its implementation absolutely clear to users.
Likewise, the settings that allow a user to control all aspects of privacy in Windows 10 itself are spread over 13 separate screens.
Also buried in the privacy statement are the types of data Cortana – Microsoft’s answer to Apple’s Siri or Google Now – uses. These include:
[…] device location, data from your calendar, the apps you use, data from your emails and text messages, who you call, your contacts and how often you interact with them on your device. Cortana also learns about you by collecting data about how you use your device and other Microsoft services, such as your music, alarm settings, whether the lock screen is on, what you view and purchase, your browse and Bing search history, and more.
Note that the “and more” statement basically covers everything that you do on a device. Nothing, in principle, is excluded.
Privacy by default
It is very difficult to trust any company that does not take a “security and privacy by default” approach to its products, and then makes it deliberately difficult to actually change settings in order to implement a user’s preferences for privacy settings.
This has manifested itself in another Windows 10 feature called WiFi Sense that has had even experts confused about the default settings and its potential to be a security hole.
WiFi Sense allows a Windows 10 user to share access to their WiFi with their friends and contacts on Facebook, Skype and Outlook. The confusion has arisen because some of the settings are on by default, even though a user needs to explicitly choose a network to share and initiate the process.
Again, Microsoft has taken an approach in which the specific privacy and security dangers are hidden in a single setting. There is no way to vet who, amongst several hundred contacts, you really want to share your network with.
There are steps users can take to mitigate the worst of the privacy issues with Windows 10, and these are highly recommended. Microsoft should have allowed users to pay a regular fee for the product in exchange for a guarantee of the levels of privacy its users deserve.
Author:
Director of UWA Centre for Software Practice at University of Western Australia
ACCC releases comparator website guidance
The Australian Competition and Consumer Commission has released consumer and industry guidance on the operation and use of comparator websites.
“The consumer guidance offers tips to help consumers get the best outcomes when using comparator websites. The industry guidance sets out the standards that the ACCC expects comparator websites to meet,” ACCC Deputy Chair Delia Rickard said.
“Comparator websites can drive competition and assist consumers to make informed purchasing decisions when comparing what are often quite complex products. However, the ACCC is concerned that poor conduct by some industry participants can mislead consumers,” Ms Rickard said.
The consumer guidance sets out tips that can assist consumers to understand and benefit from comparator websites, including:
- Making sure they know what is being compared
- Understanding commercial relationships
- Knowing what their needs are.
The industry guidance is targeted at the operators of comparator websites and businesses whose products are listed on them. This guidance sets out how industry can comply with competition and consumer protection laws, including setting out three guiding principles of:
- Facilitating honest, like-for-like comparisons
- Being transparent about commercial relationships
- Clearly disclosing who and what is being compared.
“Operators should carefully read this guidance as there will be no excuse for non-compliance with the Australian Consumer Law, and the ACCC will continue to take action where necessary,” Ms Rickard said.
The ACCC’s recent review of comparator websites was prompted by consumer and business complaints of misleading information being provided to consumers. The ACCC found that a number of websites, in particular those comparing energy plans, included information that may mislead consumers as to the extent of the comparison service, the amount of savings that could be achieved and the impartiality of the comparisons.
Following contact by the ACCC, website operators quickly implemented appropriate changes to remove or amend the potentially misleading information.
In November 2014, the ACCC released a report, The Comparator Website Industry in Australia. The report set out the ACCC’s concerns over a lack of transparency in regard to the:
- extent of the comparison service, including market coverage
- savings achieved by using the comparison service
- comparison services being unbiased, impartial or independent
- value rankings
- undisclosed commercial relationships affecting recommendations to consumers
- content and quality assurance of product information.
The guidance is available at:
Structure and Liquidity in Treasury Markets
Extract from a speech by Governor Powell at the Brookings Institution, Washington. The move to fully electronic trading raises important questions about the benefits of fully automated high-speed trading, which may lead to industry concentration and liquidity fracturing as the arms race continues. So it is a good time for market participants and regulators to collectively consider whether current market structures can be improved for the benefit of all.
Treasury markets have undergone important changes over the years. The footprints of the major dealers, who have long played the role of market makers, are in several respects smaller than they were in the pre-crisis period. Dealers cite a number of reasons for this change, including reductions in their own risk appetite and the effects of post-crisis regulations. At the same time, the Federal Reserve and foreign owners (about half of which are foreign central banks) have increased their ownership to over two-thirds of outstanding Treasuries (up from 61 percent in 2004). Banks have also increased their holdings of Treasuries to meet HQLA requirements. These holdings are less likely to turn over in secondary market trading, as the owners largely follow buy and hold strategies. Another change is the increased presence of asset managers, which now hold a bigger share of Treasuries as well. Mutual fund investors, who are accustomed to daily liquidity, now beneficially own a greater share of Treasuries.
Perhaps the most fundamental change in these markets is the move to electronic trading, which began in earnest about 15 years ago. It is hard to overstate the transformation in these markets. Only two decades ago, the dealers who participated in primary Treasury auctions had to send representatives, in person, to the offices of the Federal Reserve Bank of New York to submit their bids on auction days. They dropped their paper bids into a box. The secondary market was a bit more advanced. There were electronic systems for posting interdealer quotes in the cash market, and the Globex platform had been introduced for futures. Still, most interdealer trades were conducted over the phone and futures trading was primarily conducted in the open pit.
Today these markets are almost fully electronic. Interdealer trading in the cash Treasury market is conducted over electronic trading platforms. Thanks to advances in telecommunications and computing, the speed of trading has increased at least a million-fold. Advances in computing and faster access to trading platforms have also allowed new types of firms and trading strategies to enter the market. Algorithmic and high-frequency trading firms deploy a wide and diverse range of strategies. In particular, the technologies and strategies that people associate with high frequency trading are also regularly employed by broker-dealers, hedge funds, and even individual investors. Compared with the speed of trading 20 years ago, anyone can trade at high frequencies today, and so, to me, this transformation is more about technology than any one particular type of firm.
Given all these changes, we need to have a more nuanced discussion as to the state of the markets. Are there important market failures that are not likely to self-correct? If so, what are the causes, and what are the costs and benefits of potential market-led or regulatory responses?
Some observers point to post-crisis regulation as a key factor driving any decline or change in the nature of liquidity. Although regulation had little to do with the events of October 15, I would agree that it may be one factor driving recent changes in market making. Requiring that banks hold much higher capital and liquidity and rely less on wholesale short-term debt has raised funding costs. Regulation has also raised the cost of funding inventories through repurchase agreement (repo) markets. Thus, regulation may have made market making less attractive to banks. But these same regulations have also materially lowered banks’ probabilities of default and the chances of another financial crisis like the last one, which severely constrained liquidity and did so much damage to our economy. These regulations are new, and we should be willing to learn from experience, but their basic goals–to make the core of the financial system safer and reduce systemic risk–are appropriate, and we should be prepared to accept some increase in the cost of market making in order to meet those goals.
Regulation is only one of the factors–and clearly not the dominant one–behind the evolution in market making. As we have seen, markets were undergoing dramatic change long before the financial crisis. Technological change has allowed new types of trading firms to act as market makers for a large and growing share of transactions, not just in equity and foreign exchange markets but also in Treasury markets. As traditional dealers have lost market share, one way they have sought to remain competitive is by attempting to internalize their customer trades–essentially trying to create their own markets by finding matches between their customers who are seeking to buy and sell. Internalization allows these firms to capture more of the bid-ask spread, but it may also reduce liquidity in the public market. At the same time it does not eliminate the need for a public market, where price discovery mainly occurs, as dealers must place the orders that they cannot internalize into that market.
While the changes I’ve just discussed are unlikely to go away, I believe that markets will adapt to them over time. In the meantime, we have a responsibility to make sure that market and regulatory incentives appropriately encourage an evolution that will sustain market liquidity and functioning.
In thinking about market incentives, one observer has noted that trading rules and structures have grown to matter crucially as trading speeds have increased–in her words, “At very fast speeds, only the [market] microstructure matters.” Trading algorithms are, after all, simply a set of rules, and they will necessarily interact with and optimize against the rules of the trading platforms they operate on. If trading is at nanoseconds, there won’t be a lot of “fundamental” news to trade on or much time to formulate views about the long-run value of an asset; instead, trading at these speeds can become a game played against order books and the market rules. We can complain about certain trading practices in this new environment, but if the market is structured to incentivize those practices, then why should we be surprised if they occur?
The trading platforms in both the interdealer cash and futures markets are based on a central limit order book, in which quotes are executed based on price and the order in which they are posted. A central limit order book provides for continuous trading, but it also provides incentives to be the fastest. A trader that is faster than the others in the market will be able to post and remove orders in reaction to changes in the order book before others can do so, earning profits by hitting out-of-date quotes and avoiding losses by making sure that the trader’s own quotes are up to date.
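A stripped-down sketch of that price-time priority rule (ignoring order sizes, cancellations and everything else real platforms handle) shows why pure speed pays: at any given price, the earliest quote posted is the first to be executed.

```python
# Minimal price-time priority sketch: resting bids are ranked by price
# (highest first), then by arrival time. A faster trader who posts or
# refreshes a quote first sits ahead of everyone else at that price.
import heapq
import itertools

_arrival = itertools.count()

class BidBook:
    def __init__(self):
        self._heap = []  # entries are (-price, arrival_seq, trader)

    def post_bid(self, trader: str, price: float):
        heapq.heappush(self._heap, (-price, next(_arrival), trader))

    def match_market_sell(self):
        """Return the (trader, price) whose resting bid executes first."""
        neg_price, _, trader = heapq.heappop(self._heap)
        return trader, -neg_price

book = BidBook()
book.post_bid("slow_dealer", 99.50)
book.post_bid("fast_firm",   99.51)   # better price wins outright
book.post_bid("other_firm",  99.51)   # same price, later arrival -> queued behind

print(book.match_market_sell())  # ('fast_firm', 99.51)
```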
Technology and greater competition have led to lower costs in many areas of our economy. At the same time, slower traders may be put at a disadvantage in this environment, which could cause them to withdraw from markets or seek other venues, thus fracturing liquidity. And one can certainly question how socially useful it is to build optic fiber or microwave networks just to trade at microseconds or nanoseconds rather than milliseconds. The cost of these technologies, among other factors, may also be driving greater concentration in markets, which could threaten their resilience. The type of internalization now done by dealers is only really profitable if done on a large scale, and that too has led to greater market concentration.
A number of observers have suggested reforms for consideration. For example, some recent commentators propose frequent batch auctions as an alternative to the central limit order book, and argue that this would lead to greater market liquidity. Others have argued that current market structures may lead to greater volatility, and suggested possible alterations designed to improve the situation. To be clear, I am not embracing any particular one of these ideas. Rather, I am suggesting that now is a good time for market participants and regulators to collectively consider whether current market structures can be improved for the benefit of all.