2025-11-25 17:05:26| Fast Company

The line between human and machine authorship is blurring, particularly as it's become increasingly difficult to tell whether something was written by a person or AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence.

As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it's affecting culture, I've thought a lot about what this technology can do and where it falls short. If you're more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?

It isn't all or nothing

Thinking about these questions reminded me of Umberto Eco's essay "Apocalyptic and Integrated," which was originally written in the early 1960s. Parts of it were later included in an anthology titled Apocalypse Postponed, which I first read as a college student in Italy.

In it, Eco draws a contrast between two attitudes toward mass media. There are the "apocalyptics," who fear cultural degradation and moral collapse. Then there are the "integrated," who champion new media technologies as a democratizing force for culture. Back then, Eco was writing about the proliferation of TV and radio. Today, you'll often see similar reactions to AI.

Yet Eco argued that both positions were too extreme. It isn't helpful, he wrote, to see new media as either a dire threat or a miracle. Instead, he urged readers to look at how people and communities use these new tools, what risks and opportunities they create, and how they shape, and sometimes reinforce, power structures.

Eco's lesson came back to me while I was teaching a course on deepfakes during the 2024 election. Those were days when some scholars and media outlets were regularly warning of an imminent deepfake apocalypse.
Would deepfakes be used to mimic major political figures and push targeted disinformation? What if, on the eve of an election, generative AI was used to mimic the voice of a candidate on a robocall telling voters to stay home? Those fears weren't groundless: Research shows that people aren't especially good at identifying deepfakes. At the same time, they consistently overestimate their ability to do so.

In the end, though, the apocalypse was postponed. Post-election analyses found that deepfakes did seem to intensify some ongoing political trends, such as the erosion of trust and polarization, but there's no evidence that they affected the final outcome of the election.

Listicles, news updates, and how-to guides

Of course, the fears that AI raises for supporters of democracy are not the same as those it creates for writers and artists. For them, the core concerns are about authorship: How can one person compete with a system trained on millions of voices that can produce text at hyper-speed? And if this becomes the norm, what will it do to creative work, both as an occupation and as a source of meaning?

It's important to clarify what's meant by "online content," the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements.

A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews, and product explainers. The primary economic purpose of this content is to persuade or inform, not to express originality or creativity. Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business.
A whole industry of writers, mostly freelance and including many translators, has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text, and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

Collaborating with AI

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity. How can you distinguish a human-written article from a machine-generated one? And does that ability even matter?

Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI. A writer might draft a few lines, let an AI expand them, and then reshape that output into the final text. This article is no exception. As a non-native English speaker, I often rely on AI to refine my language before sending drafts to an editor. At times, the system attempts to reshape what I mean. But once its stylistic tendencies become familiar, it becomes possible to avoid them and maintain a personal tone.

Also, artificial intelligence is not entirely artificial, since it is trained on human-made material. It's worth noting that even before AI, human writing was never entirely human, either. Every technology, from parchment and stylus to the typewriter and now AI, has shaped how people write and how readers make sense of it.

Another important point: AI models are increasingly trained on datasets that include not only human writing but also AI-generated and human-AI coproduced text. This has raised concerns about their ability to continue improving over time. Some commentators have already described a sense of disillusionment following the release of newer large models, with companies struggling to deliver on their promises.
Human voices may matter even more

But what happens when people become overly reliant on AI in their writing? Some studies show that writers may feel more creative when they use artificial intelligence for brainstorming, yet the range of ideas often becomes narrower. This uniformity affects style as well: These systems tend to pull users toward similar patterns of wording, which reduces the differences that usually mark an individual voice. Researchers also note a shift toward Western, and especially English-speaking, norms in the writing of people from other cultures, raising concerns about a new form of "AI colonialism."

In this context, texts that display originality, voice, and stylistic intention are likely to become even more meaningful within the media landscape, and they may play a crucial role in training the next generations of models. If you set aside the more apocalyptic scenarios and assume that AI will continue to advance, perhaps at a slower pace than in the recent past, it's quite possible that thoughtful, original, human-generated writing will become even more valuable. Put another way: The work of writers, journalists, and intellectuals will not become superfluous simply because much of the web is no longer written by humans.

Francesco Agnellini is a lecturer in digital and data studies at Binghamton University, State University of New York. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

2025-11-25 17:00:00| Fast Company

When a company with tens of thousands of software engineers found that uptake of a new AI-powered tool was lagging well below 50%, it wanted to know why. It turned out that the problem wasn't the technology itself. What was holding the company back was a mindset that saw AI use as akin to cheating. Those who used the tool were perceived as less skilled than their colleagues, even when their work output was identical. Not surprisingly, most of the engineers chose not to risk their reputations and carried on working in the traditional way.

These kinds of self-defeating attitudes aren't limited to one company; they are endemic across the business world. Organizations are being held back because they are importing negative ideas about AI from contexts where they make sense into corporate settings where they don't. The result is a toxic combination of stigma, unhelpful policies, and a fundamental misunderstanding of what actually matters in business. The path forward involves setting aside these confusions and embracing a simpler principle: Artificial intelligence should be treated like any other powerful business tool.

This article shares what I have learned over the past six months while revising the AI use policies for my own companies, drawing on the research and insights of my internal working group (Paul Scade, Pranay Sanklecha, and Rian Hoque).

Confusing Contexts

In educational contexts, it is entirely appropriate to be suspicious about generative AI. School and college assessments exist for a specific purpose: to demonstrate that students have acquired the skills and the knowledge they are studying. Feeding a prompt into ChatGPT and then handing in the essay it generates undermines the reason for writing the essay in the first place.

When it comes to artistic outputs, like works of fiction or paintings, there are legitimate philosophical debates about whether AI-generated work can ever possess creative authenticity and artistic value. And there are tough questions about where the line might lie when it comes to using AI tools for assistance.

But issues like these are almost entirely irrelevant to business operations. In business, success is measured by results and results alone. Does your marketing copy persuade customers to buy? Yes or no? Does your report clarify complex issues for stakeholders? Does your presentation convince the board to approve your proposal? The only metrics that matter in these cases are accuracy, coherence, and effectiveness, not the content's origin story. When we import the principles that govern legitimate AI use in other areas into our discussion of its use in business, we undermine our ability to take full advantage of this powerful technology.
The Disclosure Distraction

Public discussions about AI often focus on the dangers that follow from allowing generative AI outputs into public spaces. From the "dead internet" theory to arguments about whether it should be a legal requirement to label AI outputs on social media, policymakers and commentators are rightly concerned about malicious AI use infiltrating and undermining the public discourse. Concerns like these have made rules about disclosure of AI use central to many corporate AI use policies.

But there's a problem here. While these discussions and concerns are perfectly legitimate when it comes to AI agents shaping debates around social and political issues, importing these suspicions into business contexts can be damaging. Studies consistently show that disclosed AI use triggers negative bias within companies, even when that use is explicitly encouraged and when the output quality is identical to human-created content. The study mentioned at the start of this article found that internal reviewers assessed the same work output to be less competent when they were told that AI had been used in its production than when they were told it had not been, even when the AI tools in question were known to increase productivity and their use was encouraged by the employer. Similarly, a meta-analysis of 13 experiments published this year identified a consistent loss of trust in those who disclose their AI use. Even respondents who felt positively about AI use themselves tended to feel higher distrust toward colleagues who used it.

This kind of irrational prejudice creates a chilling effect on the innovative use of AI within businesses. Disclosure mandates for the use of AI tools reflect organizational immaturity and fear-based policymaking. They treat AI as a kind of contagion and create stigma around a tool that should be as uncontroversial as using spell-check or design templates, or having the communications team prepare a statement for the CEO to sign off on.
Companies that focus on disclosure are missing the forest for the trees. They have become so worried about the process that they're ignoring what actually matters: the quality of the output.

The Ownership Imperative

The solution to both context confusion and the distracting push for disclosure is simple: Treat AI like a perfectly normal, albeit powerful, technological tool, and insist that the humans who use it take full ownership of whatever they produce.

This shift in mindset cuts through the confused thinking that plagues current AI policies. When you stop treating AI as something exotic that requires special labels and start treating it as you would any other business tool, the path forward becomes clear. You wouldn't disclose that you used Excel to create a budget or PowerPoint to design a presentation. What matters isn't the tool; it is whether you stand behind the work.

But here's the crucial part: Treating artificial intelligence as normal technology doesn't mean you can play fast and loose with it. Quite the opposite. Once we put aside concepts that are irrelevant in a business context, like creative authenticity and cheating, we are left with something more fundamental: accountability. When AI is just another tool in your tool kit, you own the output completely, whether you like it or not. Every mistake, every inadequacy, every breach of the rules belongs to the human who sends the content out into the world. If the AI plagiarizes and you use that text, you've plagiarized. If the AI gets facts wrong and you share them, they're your factual errors. If the AI produces generic, weak, unconvincing language and you choose to use it, you've communicated poorly. No client, regulator, or stakeholder will accept "the AI did it" as an excuse. This reality demands rigorous verification, editing, and fact-checking as nonnegotiable components of the AI-use workflow.
A large consulting company recently learned this lesson when it submitted an error-ridden AI-generated report to the Australian government. The mistakes slipped through because humans in the chain of responsibility treated AI output as finished work rather than as raw material requiring human oversight and ownership. The firm couldn't shift blame to the tool; it owned the embarrassment, the reputational damage, and the client relationship fallout entirely.

Taking ownership isn't just about accepting responsibility for errors. It is also about recognizing that once you have reviewed, edited, and approved AI-assisted work, it ceases to be AI output and becomes your human output, produced with AI assistance. This is the mature approach that moves us past disclosure theater and toward genuine accountability.

Making the Shift: Owning AI Use

Here are four steps your business can take to move from confusion about contexts to the clarity of an ownership mindset.

1. Replace disclosure requirements with ownership confirmation. Stop asking "Did you use AI?" and start requiring clear accountability statements: "I take full responsibility for this content and verify its accuracy." Every piece of work should have a human who explicitly stands behind it, regardless of how it was created.

2. Establish output-focused quality standards. Define success criteria that ignore creation method entirely: Is it accurate? Is it effective? Does it achieve its business objective? Create verification workflows and fact-checking protocols that apply equally to all content. When something fails these standards, the conversation should be about improving the output, not about which tools were used.

3. Normalize AI use through success stories, not policies. Share internal case studies of teams using AI to deliver exceptional results. Celebrate the business outcomes, such as faster delivery, higher quality, and breakthrough insights, without dwelling on the methodology.
Make AI proficiency a valued skill on par with Excel expertise or presentation design, not something requiring special permission or disclosure.

4. Train for ownership, not just usage. Develop training that goes beyond prompting techniques to focus on verification, fact-checking, and quality assessment. Teach employees to treat AI output as raw material that requires their expertise to shape and validate, not as finished work. Include modules on identifying AI hallucinations, verifying claims, and maintaining brand voice.

The companies that will thrive in the next year won't be those that unconsciously disincentivize the use of AI through the stigma of disclosure policies. They will be those that see AI for what it is: a powerful tool for achieving business results. While your competitors tie themselves in knots over process documentation and disclosure theater, you can leapfrog past them with a simple principle: Own your output, regardless of how you created it. The question that will separate winners from losers isn't "Did you use AI?" but "Is this excellent?" If you're still asking the first question, you are already falling behind.



 

2025-11-25 16:22:54| Fast Company

Design flaws caused a Tesla Model 3 to suddenly accelerate out of control before it crashed into a utility pole and burst into flames, killing a woman and severely injuring her husband, a lawsuit filed in federal court alleges.

Another defect with the door handle design thwarted bystanders who were trying to rescue the driver, Jeff Dennis, and his wife, Wendy, from the car, according to the lawsuit filed Friday in U.S. District Court for the Western District of Washington.

Wendy Dennis died in the Jan. 7, 2023, crash in Tacoma, Washington. Jeff Dennis suffered severe leg burns and other injuries, according to the lawsuit.

Messages left Monday with plaintiffs' attorneys and Tesla were not immediately returned.

The lawsuit seeks punitive damages in California since the Dennises' 2018 Model 3 was designed and manufactured there. Tesla also had its headquarters in California at the time, before later moving to Texas.

Among other financial claims, the lawsuit seeks wrongful death damages for both Jeff Dennis and his late wife's estate. It asks for a jury trial.

Tesla doors have been at the center of several crash cases because the battery powering the unlocking mechanism shuts off in a crash, and the manual releases that override that system are known for being difficult to find.

Last month, the parents of two California college students killed in a Tesla crash sued the carmaker, saying the students were trapped in the vehicle as it burst into flames because of a design flaw that prevented them from opening the doors. In September, federal regulators opened an investigation into complaints by Tesla drivers of problems with stuck doors.

Jeff and Wendy Dennis were running errands when the Tesla suddenly accelerated for at least five seconds.
Jeff Dennis swerved to miss other vehicles before the car hit the utility pole and burst into flames, the lawsuit says.

The automatic emergency braking system did not engage before the car hit the pole, the lawsuit alleges, even though it is designed to apply the brakes when a frontal collision is considered unavoidable.

Bystanders couldn't open the doors because the handles do not work from the outside; they also rely on battery power to operate. The doors also couldn't be opened from inside because the battery had shut off in the fire, and a manual override button is hard to find and use, the lawsuit alleges. The heat from the fire prevented bystanders from getting close enough to try to break out the windows.

Defective battery chemistry and battery pack design unnecessarily increased the risk of a catastrophic fire after the impact with the pole, the lawsuit alleges.

Thiessen reported from Anchorage, Alaska.

Mark Thiessen, Associated Press



 

2025-11-25 15:45:33| Fast Company

President Donald Trump is directing the federal government to combine efforts with tech companies and universities to convert government data into scientific discoveries, acting on his push to make artificial intelligence the engine of the nation's economic future.

Trump unveiled the "Genesis Mission" as part of an executive order he signed Monday that directs the Department of Energy and national labs to build a digital platform to concentrate the nation's scientific data in one place.

It solicits private sector and university partners to use their AI capability to help the government solve engineering, energy, and national security problems, including streamlining the nation's electric grid, according to White House officials who spoke to reporters on condition of anonymity to describe the order before it was signed. Officials made no specific mention of seeking medical advances as part of the project.

"The Genesis Mission will bring together our Nation's research and development resources combining the efforts of brilliant American scientists, including those at our national laboratories, with pioneering American businesses; world-renowned universities; and existing research infrastructure, data repositories, production plants, and national security sites to achieve dramatic acceleration in AI development and utilization," the executive order says.

The administration portrayed the effort as the government's most ambitious marshaling of federal scientific resources since the Apollo space missions of the late 1960s and early 1970s, even as it had cut billions of dollars in federal funding for scientific research and thousands of scientists had lost their jobs and funding.

Trump is increasingly counting on the tech sector and the development of AI to power the U.S. economy, made clear last week as he hosted Saudi Arabia's Crown Prince Mohammed bin Salman.
The monarch has committed to investing $1 trillion, largely from the Arab nation's oil and natural gas reserves, to pivot his nation into becoming an AI data hub.

For the U.S.'s part, funding was appropriated to the Energy Department as part of the massive tax-break and spending bill signed into law by Trump in July, White House officials said.

As AI raises concerns that its heavy use of electricity may be contributing to higher utility rates in the near term, a political risk for Trump, administration officials argued that rates will come down as the technology develops. They said the increased demand will build capacity in existing transmission lines and bring down costs per unit of electricity.

Data centers needed to fuel AI accounted for about 1.5% of the world's electricity consumption last year, and those facilities' energy consumption is predicted to more than double by 2030, according to the International Energy Agency. That increase could lead to burning more fossil fuels such as coal and natural gas, which release greenhouse gases that contribute to warming temperatures, sea level rise, and extreme weather.

The project will rely on national labs' supercomputers but will also use supercomputing capacity being developed in the private sector. The project's use of public data, including national security information, along with private sector supercomputers prompted officials to issue assurances that there would be controls to respect protected information.

Thomas Beaumont, Associated Press



 

2025-11-25 15:07:11| Fast Company

I keep coming up against a logical fallacy in strategy that I feel compelled to address. The logic holds that when a company has a shareholder-unfriendly component of its portfolio (e.g., the business in question is cyclical, or it is low-growth or low-margin), the company should diversify to make that business less shareholder-unfriendly. I take on the fallacy in this Playing to Win/Practitioner Insights (PTW/PI) piece entitled Diversification Can't Disappear a Strategy Problem: It Just Creates a Different Problem. And as always, you can find all the previous PTW/PI here.

The argument

The usual motivator of this argument is cyclicality: We have a cyclical business, and shareholders don't like the ups and downs of that business across the cycle, so they discount our stock because of the volatility of our earnings. A memorable example of this for me was Alcan in the 1980s, at that time the world's best aluminum company and arguably Canada's finest company. But it didn't like the cyclicality of its core business, which was making and selling aluminum ingots. The downstream industries that used aluminum in some way appeared alluringly less cyclical. So, Alcan invested in a number of those businesses, including packaging and aluminum-structured automobiles.

Other shareholder-unfriendly attributes include being a slow-growth business. This caused companies like News Corporation to buy MySpace to get into a fast-growing business: Internet services. Another is a business that has experienced a drop in structural attractiveness, and hence inherent profitability, perhaps because buyers are getting more powerful or a supplied input becomes much more expensive.

Unfortunately, these diversification efforts don't often succeed. For Alcan, these downstream businesses turned out to have very little in common with the skills and capabilities involved in making ingots of aluminum and were eventually sold off.
For example, the packaging portfolio was sold off to Amcor, a global packaging company that knew how to run a packaging business. And News Corporation exited MySpace with its tail between its legs, selling it for $35 million six years after buying it for $580 million.

I am not opposed to the intent

I am in favor of improving one's portfolio of businesses. In fact, I was part of one of the greatest such efforts in recent memory. I was on the board of Thomson Corporation, which started its transformation as the world's largest newspaper company, the world's largest textbook publisher (tied with Pearson), Europe's largest travel company, and a major player in North Sea oil. It concluded the transformation as Thomson Reuters, the leading supplier of online, subscription-based, must-have information, analytics, and workflow solutions for legal, financial, accounting, and investor relations professionals, having exited its entire starting portfolio.

So, I get it. I like investing in good businesses as much as the next person. I just hate the logic regarding shareholders.

Shareholders aren't geniuses

I have said that on numerous occasions (e.g., here and here). But they are not stupid either. Let's say the company is correct that shareholders don't like something about an important business in its portfolio: it is cyclical, or growing slowly, or its industry is becoming less structurally attractive. If that is true, shareholders will collectively price that negative feature into their valuation of that business as part of their overall valuation of the stock. Let's say the contribution of that shareholder-unfriendly business to corporate earnings per share (EPS) is $4/share and that if it weren't cyclical, shareholders would put a 20X multiple on those earnings. So, it would have contributed $80 toward the company's overall share price.
But let's say that because it is cyclical, shareholders discount the value of those earnings to a 15X multiple, meaning that the cyclicality of the business costs the company $20 on its share price (i.e., $4 of EPS times the 5-point-lower multiple). And if there are 50 million shares outstanding, that is a cost of $1 billion in shareholder value due to the cyclicality of that business. The same calculation would hold if it were a slower-growing business on which the shareholders similarly put a 15X multiple instead of 20X, or if a business has experienced a sharp structural drop in future profit potential. The bottom line is that because of the features of the existing business, shareholders subtract $1 billion of value from the overall valuation of the business.

Let's continue with the logic. Imagine the company diversifies into a non-cyclical business, or a fast-growing business, or a higher-profit business. If it is a great business, the shareholders will put a high valuation on it. Let's say that the company buys such a business for $2 billion and it performs so well that shareholders soon value it at $5 billion, which makes it a great diversification investment.

But the logic of this argument holds further (implicitly) that over and above the value that shareholders will give to the great new business into which the company diversified, the shareholders will reduce the $1 billion valuation hit that they are applying to the problematic business. Not only can I not think of any reason why shareholders would do that, I have never seen them do it, because there is no reason. In the words of the great Nobel laureate, the late George Stigler, when I met him in his Chicago apartment: "Roger, a company can't use its competitive advantage twice." A brilliant insight from a brilliant man. In this case, it can't use the plus-$5 billion to disappear the minus-$1 billion. In essence, it will be a plus-$5 billion and an unchanged minus-$1 billion.

What is the problem?
As I said, I like investing in great new businesses. If there is a $5 billion opportunity available for a $2 billion price, a company is foolish not to grab it. The problem is a company putting itself in the position of believing that the presence of the undesirable business creates a requirement to diversify.

This is especially the case because the tool used is typically acquisition, since organic growth is viewed as taking too long to solve the problem. The failure rate in acquisitions is legendarily high in the general case, and this is a very specific case that makes doing a successful acquisition even harder. There is a very specific requirement of the acquisition: it must reduce our overall cyclicality, or increase our overall growth rate, or increase our overall profit margin. These are hard criteria to meet in an exercise that already has a high degree of difficulty.

Additionally, it works against a key principle that helps determine acquisition success. As I have written about previously in Harvard Business Review, acquisitions are more financially and strategically successful if they are more about what the acquiring company can do to help the acquired company than the other way around. When the focus is on what the acquired company can do for the acquirer, the acquirer tends to have to pay top dollar for the acquired company, and the acquirer can do little to help pay for the high takeover premium, as with the News Corporation-MySpace acquisition above. News Corporation paid absolutely top dollar, and it had no idea how to help MySpace as it faced withering competition.

Thus, in the failure-ridden world of acquisitions, the logic of diversifying to eliminate the shareholder problem drives companies toward very low-success-rate approaches. Net, there are compounding shortcomings of the approach. First, it doesn't actually solve the problem it is designed to solve. And second, it involves engaging in a very high-risk activity.
That is not a good combination.

Implications for strategy

As I pointed out previously in yet another Harvard Business Review article, companies are better off if they simply value businesses at what they are worth, not their value on the books. They may wish that a business was worth as much as or more than the amount of investment put into it. But the instant the investment is irreversibly made into the business in question, its value becomes a function of its future prospects, not its book value. If it was a poor investment, its true value will go down, and the opposite if it was a good investment.

That is the valuation that shareholders make every trading minute. They revalue your assets continuously by collectively buying and selling your shares. Why shouldn't you revalue similarly? Bad businesses don't have bad shareholder returns; shareholders have long since revalued them downwards. And great businesses don't automatically have great shareholder returns; shareholders have long since revalued them upwards. Shareholders get valuation.

If you can make a business better, great: just do it. But don't try to disguise the shortcomings of a business through diversification. You aren't fooling anyone but yourself, and certainly not the shareholders. A far better plan is to suck it up and recognize the true value. And if you don't like what you have, sell it and move on.

That is what we did at Thomson Reuters. We didn't attempt to disguise the negative attributes of portfolio companies. We got rid of them to companies that liked their attributes better than we did. For example, we sold our newspaper business to the world's biggest newspaper company, Gannett, for US$2.2 billion. They were enthusiastic, but it ended up being a deal that a Gannett CFO later confessed to me was the worst acquisition deal in his company's history.
And even better, we sold the textbook business to a pair of delusional private equity firms for US$7.75 billion, and they resold it three years later for a reported US$2.25 billion. Ouch! The combined divestiture proceeds of US$10 billion were really helpful in bringing the transformation to fruition.

Practitioner insights

I try hard not to be disrespectful to the status quo. Most things that stick around for a long time do so because they have shown themselves to make sense. But in the world of business ideas, a minority, like SWOT, strategies-not-strategy, and revenue forecasting, stick around even if they fail to make any logical sense. You must be ready to reject them when they are demonstrably dumb ideas.

This is one of them. Don't invest in big and high-risk ways to disguise a problem that can't be disguised. It is one of the silliest and most wasteful activities in company life. And there are lots of folks hanging around who make huge returns by whispering in corporate executive ears about this kind of diversification. They are the (so-called) strategy consultants, investment bankers, and M&A lawyers who make countless billions promoting stupid deals, like the disastrous AT&T takeover of Time Warner, which AT&T bought for $85 billion and sold for $43 billion three years later. That was the equivalent of the AT&T executive team making a $38 million stack of shareholder money outside AT&T corporate headquarters, pouring gasoline on it, lighting it on fire, and repeating that exercise every day for three years. The brilliant deal was purportedly going to get the boring AT&T into the exciting, faster-growing, and higher-margin content business. I predicted at the time that it would be an epic disaster, and it most certainly was. It is what happens when you adhere to a loser theory.

Instead, either love a business or get rid of it to someone who will love it more. You can't win in a business that you don't love.
Competitors who love their business will wipe the floor with you and yours. Only spend time and resources on businesses that you love. Those are the only ones that will get the care, attention and investment that they need and deserve.
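As a footnote, the daily-bonfire arithmetic in the AT&T example can be checked quickly. The purchase and sale figures are from the text; the rounding to a 365-day year is my assumption:

```python
# Rough check of the AT&T/Time Warner daily-loss arithmetic.

purchase_price = 85e9   # $85 billion paid for Time Warner
sale_price = 43e9       # $43 billion received roughly three years later
days = 3 * 365          # approximate holding period in days

total_loss = purchase_price - sale_price   # $42 billion destroyed
loss_per_day = total_loss / days           # roughly $38 million per day

print(f"Total loss: ${total_loss / 1e9:.0f} billion")
print(f"Loss per day: ${loss_per_day / 1e6:.1f} million")
```

So the "$38 million stack of shareholder money... every day for three years" image is arithmetically on the mark.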

