2024-2025

Area: Organizational Behaviour

Date: 22 November 2024
Time: 10.30am – 12.00pm
Room: Armstrong 375

Guest Speaker: Emilio J. Castilla (MIT Sloan School of Management)

Topic: The Unfulfilled Promise of Meritocracy in Organizations

What biases and obstacles get in the way when companies seek to attract, retain, and reward the best-performing employees? Given the widely popular goals of promoting meritocracy and creating opportunities inside organizations, I have for the past decade conducted research across multiple organizations to investigate the role that merit, performance evaluations, and other talent-management practices play in shaping employees’ careers in today’s workplaces. I have found evidence of variation in how leaders and managers define merit and consequently make merit-based employment decisions, depending on the organizational context they work in as well as the characteristics of the individuals they screen and evaluate. In fact, I have shown that meritocratic goals, under certain organizational circumstances, can introduce biases in favor of white men compared to women and racial minorities. In my presentation, I will discuss the key findings of some of my projects on achieving true meritocracy and excellence in organizations. In so doing, I will highlight the practical insights of my research into the areas of employment, organizations, and workplace inequality—a topic that is the focus of a new book I am writing.


Area: Operations Management

Date: 6 December 2024
Time: 10am – 11am
Room: Bronfman 245

Guest Speaker: Tinglong Dai (Carey Business School, Johns Hopkins University)

Topic: Global Advances in Medical Artificial Intelligence

Artificial intelligence (AI) has emerged as a transformative force in healthcare, with the U.S. Food and Drug Administration (FDA) approving 950 medical AI devices as of June 2024—a remarkable increase from 343 devices just three years ago, primarily for diagnostic and screening purposes. Yet, the real-world use of medical AI remains limited. Only a handful of FDA-cleared AI devices have more than 10,000 insurance claims as of June 2023, representing just one diagnostic area (coronary artery disease) and a single screening target (diabetic retinopathy). Scaling medical AI calls for a new stream of business research to advance medical AI globally (as opposed to locally), grounded in theoretical inquiry and real-world evidence at the micro, meso, and macro levels. Corresponding to each of these levels, this talk will address the behavioral, incentive, and policy aspects of advancing medical AI.

Behavioral: I will share findings from a randomized controlled survey at Johns Hopkins Medicine that explores physicians' attitudes toward their peers' use of generative AI and the implications for AI and trust.


Area: Accounting

Date: 21 February 2025
Time: 10.30am – 12pm
Room: Bronfman 210

Guest Speaker: Vivian Fang (Indiana University)

Topic: Does Corporate Purpose Conflict With Shareholder Returns?

This paper studies the link between purpose statements and future shareholder returns. We use deep neural networks to identify a “purpose statement” as one that views stakeholder value as a means to ultimately improve shareholder value, in contrast to “purpose-like” statements that put shareholders and stakeholders on an equal footing. A value-weighted portfolio of companies with purpose statements earns a 0.25% monthly alpha above characteristics benchmarks; a long-short portfolio that buys firms with purpose statements and sells those with purpose-like statements earns a 0.28% monthly alpha. These results are stronger for large firms, consistent with their greater latitude and pressure to invest in stakeholders even when doing so is inconsistent with shareholder value. Purpose statements, but not purpose-like statements, are positively linked to future earnings surprises, suggesting a channel through which they lead to higher stock returns. Purpose statements are also positively associated with unvested, but not vested, CEO equity ownership, suggesting that long-term equity more effectively aligns managers with shareholders’ long-term interests.


Area: Information Systems

Date: 28 February 2025
Time: 10.30am – 12pm
Room: Bronfman 245

Guest Speaker: Dr. Ling Xue (Terry College of Business, University of Georgia)

Topic: Using Generative AI to Address Puffery Advertising: Evidence from Two Field Studies

Puffery advertising raises concerns about information manipulation in advertisements. This study uses two field studies to examine how generative artificial intelligence (GAI) can correct puffery while maintaining ad attractiveness. In the first study, taking advantage of a special context in which media platforms enforce varying puffery-tolerance policies, we examine cases where the same advertisement is corrected by some platforms but remains unaltered on others. Analysis of 295,403 real advertising exposures shows that using GPT to address puffery content significantly increases the click-through probability of an advertisement by 16%. In the second study, we conducted a randomized field experiment on 32,200 advertising exposures, using prompt engineering to guide GPT in revising each of twelve individual linguistic and emotional features of the advertisements. The results reveal that enhancing linguistic readability is the most effective approach for transforming puffery advertisements into attractive and legitimate ones. The study has important implications for how GAI can be used to effectively address puffery advertising and improve marketing performance. It also illustrates that puffery advertising may not always be as alluring as it appears: tackling puffery advertising with GAI can not only resolve ethical concerns in advertising but also enhance advertising effectiveness.

Keywords: puffery advertising, generative artificial intelligence, linguistic feature, emotion, ChatGPT, prompt engineering
