I2SDS Seminars: Fall 2015

A Model to Estimate Individual Preferences Using Panel Data

Speaker: Gustavo Vulcano, Stern School of Business, New York University


In a retail operation, customer choices may be affected by stock-out and promotion events. Given panel data with the transaction history of each customer, our goal is to predict future purchases. We use a general nonparametric framework in which we represent customers by partial orders of preferences. In each store visit, each customer samples a full preference list of the products, consistent with her partial order, forms a consideration set, and then purchases the most preferred product among those considered. Our approach involves: (a) defining behavioral models to build consideration sets, (b) a clustering algorithm for determining customer segments in the market, and (c) the derivation of marginal distributions for general partial preferences under the multinomial logit (MNL) and Mallows models. Numerical experiments on real-world panel data show that our approach yields more accurate, fine-grained predictions of individual purchase behavior than state-of-the-art methods.
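The sample-then-choose mechanism described above can be sketched in a few lines. This is an illustrative simplification with hypothetical function names: it samples a consistent full ranking by a uniform random topological sort, rather than under the MNL or Mallows distributions the paper actually uses.

```python
import random

def sample_ranking(products, partial_order):
    """Sample a full preference list consistent with a partial order,
    via a random topological sort.
    partial_order: set of (a, b) pairs meaning 'a is preferred to b'."""
    remaining = set(products)
    ranking = []
    while remaining:
        # Any product not dominated by another remaining product may come next.
        candidates = [p for p in remaining
                      if not any((q, p) in partial_order for q in remaining if q != p)]
        pick = random.choice(candidates)
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

def purchase(products, partial_order, consideration_set):
    """Return the most preferred product among those considered, or None."""
    ranking = sample_ranking(products, partial_order)
    for p in ranking:
        if p in consideration_set:
            return p
    return None  # no considered product was acceptable: no purchase
```

For a customer whose partial order already totally orders the products, the sampled ranking is deterministic and the purchase is simply her top-ranked considered product.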

Joint work with Srikanth Jagabathula, NYU

Friday, December 11th, 11:00 am – 12:00 pm

The Impact of Consumer Search Cost on Assortment Planning and Pricing

Speaker: Ruxian Wang, Carey Business School, Johns Hopkins University


Consumers search for product information to resolve valuation uncertainties before purchase. Under the consider-then-choose policy, a consumer first forms her consideration set by balancing utility uncertainty against search cost; in the second stage, she evaluates all products in her consideration set and chooses the one with the highest net utility. Choice behavior within consideration sets is governed by the multinomial logit model. The revenue-ordered assortment fails to be optimal, although it guarantees at least half of the optimal revenue. We propose the k-quasi attractiveness-ordered assortment and show that it is arbitrarily near-optimal for a special case. The assortment problems are generally NP-hard, so we develop efficient exact and approximation algorithms for markets with homogeneous and heterogeneous consumers. For the joint assortment planning and pricing problem, we show that the intrinsic-utility-ordered assortment and the quasi-same-price policy, which charges the same price for all products except at most one, are optimal.
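The second-stage MNL choice within a consideration set has a standard closed form, which can be sketched as follows. The function names are hypothetical and the no-purchase option's utility is normalized to zero, a common convention not stated in the abstract.

```python
import math

def mnl_choice_probs(utilities, consideration_set):
    """MNL choice probabilities over a consideration set.
    utilities: dict product -> mean utility; outside option has utility 0."""
    weights = {j: math.exp(utilities[j]) for j in consideration_set}
    denom = 1.0 + sum(weights.values())  # the 1.0 is exp(0), the no-purchase option
    return {j: w / denom for j, w in weights.items()}

def expected_revenue(prices, utilities, assortment):
    """Expected revenue of an assortment under MNL choice."""
    probs = mnl_choice_probs(utilities, assortment)
    return sum(prices[j] * probs[j] for j in assortment)
```

Evaluating `expected_revenue` over candidate assortments is the brute-force baseline against which structured policies such as revenue-ordered assortments are compared.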

Friday, November 13th, 11:00 am – 12:00 pm

Carbon Market Dynamics and Supply Chain Management: Observations and Implications

Speakers: Ulku Gurler and Emre Berk, Bilkent University, Turkey


In this talk, we share our findings from a stream of funded research on supply chain management (SCM) decisions in the presence of carbon markets. We begin with stochastic and time series (fractional Brownian motion and ARMA-GARCH) modeling of carbon market prices based on the EU-ETS carbon emission allowances for the 2008-2012 time period. We discuss the impact of such market behavior on operational decisions in basic supply chain management models. In particular, we revisit the newsvendor setting where the final product uses supplementary and/or complementary inputs that generate a carbon footprint for the product and there are carbon emission caps. We study the fundamental stocking and input allocation decisions for a risk-neutral decision maker. Using this model as the building block, we then discuss (i) advance purchase contracts and (ii) risk-averse decision makers. We illustrate our findings through a real-life example in agricultural production.
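For intuition, a minimal version of the capped stocking decision can be sketched as a discrete-demand newsvendor whose order quantity is bounded by an emissions cap. This is an illustration with hypothetical names, not the authors' model, which also covers input allocation and market-priced allowances.

```python
def newsvendor_with_cap(price, cost, salvage, demand_pmf, footprint, cap):
    """Stocking quantity maximizing expected profit for a risk-neutral
    newsvendor, subject to a carbon cap: q * footprint <= cap.
    demand_pmf: dict mapping demand level -> probability."""
    max_q = int(cap // footprint)  # largest feasible order under the cap
    best_q, best_profit = 0, float('-inf')
    for q in range(max_q + 1):
        # Expected profit: sales revenue + salvage on leftovers - procurement cost.
        profit = sum(p * (price * min(q, d) + salvage * max(q - d, 0) - cost * q)
                     for d, p in demand_pmf.items())
        if profit > best_profit:
            best_q, best_profit = q, profit
    return best_q, best_profit
```

When the cap binds, the feasible region is truncated below the classical critical-fractile quantity, which is how the emissions constraint distorts the stocking decision.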

Friday, October 30th, 11:00 am – 12:00 pm

On a Generalized Signature of Repairable Coherent Systems

Speaker: Fabrizio Ruggeri, CNR-IMATI, Milano, Italy


We introduce the notion of signatures in repairable systems made of different components, each of which can individually fail and be minimally repaired up to a fixed number of times. Failures occur according to Poisson processes, which may have either the same intensity function for each component or different ones. The former case recalls the notion of signature introduced by Samaniego for i.i.d. random variables, whereas here we consider independent Poisson processes with identical intensity functions. We also provide the reliability function for different systems, from series systems to general coherent ones.
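In the identical-intensity special case, the reliability of a series system of such components admits a simple closed form: a component is up at time t as long as its Poisson failure count has not exceeded its repair budget. The sketch below (hypothetical function names, an illustrative special case rather than the talk's general signature results) makes this concrete.

```python
import math

def poisson_cdf(k, lam):
    """P(N <= k) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def series_system_reliability(t, cum_intensity, n_components, max_repairs):
    """Series system of i.i.d. components, each failing according to a Poisson
    process with cumulative intensity Lambda(t) and minimally repaired up to
    max_repairs times; a component is down once it exhausts its repair budget."""
    lam = cum_intensity(t)
    comp_rel = poisson_cdf(max_repairs, lam)  # at most max_repairs failures by t
    return comp_rel ** n_components  # series system: all components must be up
```

With `max_repairs = 0` this reduces to the familiar exponential-reliability product for a non-repairable series system.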

Joint work with Majid Chahkandi and Alfonso Suarez-Llorens

Tuesday, October 27th, 11:00 am – 12:00 pm

Natural Hazards Modeling: From Storm Impacts to the Evolution of Regional Vulnerability Over Time

Speaker: Seth Guikema, University of Michigan, Department of Industrial and Operations Engineering


Hurricanes regularly impact communities and infrastructure systems along the U.S. coast, leading to substantial damage. Electric power systems are particularly heavily impacted in many storms. Many of the most vital systems and organizations in the U.S. are highly dependent on the functioning of the power system. A critical component of adequately preparing for and responding to these storms is having estimates of the magnitude and spatial distribution of the impacts prior to the event so that electric utilities, other power-dependent utilities, and government agencies can plan appropriately for their emergency response efforts for a given storm. In the longer term, climate change has the potential to substantially alter the hurricane environment and thus the risk to coastal communities and power systems. This poses substantial challenges in trying to achieve resilient and sustainable infrastructure and communities. This talk presents work done over the past 8 years to develop accurate power outage prediction models for hurricanes. The talk also summarizes recently published results that estimate the potential changes in hurricane risk to power systems under different future climate scenarios. This provides a basis for estimating which areas are most sensitive to changes in the hurricane environment to support planning for resilience and sustainability. The talk closes with an overview of an ongoing interdisciplinary effort led by Dr. Guikema to examine how the resilience and sustainability of a region evolves over time as an area experiences repeated exposure to hurricanes. This project uses an agent-based model to focus on the interplay between the hazard environment, individual behavior, and policy drivers in influencing the evolution of a community.

Friday, October 23rd, 11:00 am – 12:15 pm

An Empirical Analysis of the Sunk Cost Fallacy in Penny Auctions

Speaker: Chris Parker, Pennsylvania State University, Department of Supply Chain and Information Systems


The sunk cost fallacy is a widely known decision-making bias. We explore how the sunk cost fallacy impacts bidding behavior in penny auctions, where individuals pay a non-recoverable bidding fee to increment the current price by a penny. To do this we utilize a large, proprietary dataset encompassing 1.2 billion bids from approximately 14 million people across 10.3 million auctions for nearly 150,000 products over a five-year period. The detail of our dataset allows us to go further than previous research and distinguish between financial and psychological sunk costs. We find that the sunk cost fallacy is primarily driven by the financial side of previous bids, but the psychological cost is also important. We also explore how intrinsic ability and learning mitigate the two sunk cost fallacy mechanisms. Finally, we explore bidder and product heterogeneity in the extent to which learning occurs.

Friday, October 16th, 11:00 am – 12:00 pm

On Information Quality (InfoQ)

Speaker: Ron S. Kenett, KPA Ltd., Israel and University of Turin, Italy


In a 2014 paper in the Journal of the Royal Statistical Society (Series A), Kenett and Shmueli define the concept of Information Quality (InfoQ) as the potential of a dataset to achieve a specific (scientific or practical) goal using a given empirical analysis method. InfoQ is different from data quality and analysis quality, but is dependent on these components and on the relationship between them. Eight dimensions are used to assess InfoQ: 1) Data Resolution, 2) Data Structure, 3) Data Integration, 4) Temporal Relevance, 5) Generalizability, 6) Chronology of Data and Goal, 7) Operationalization and 8) Communication. The talk will discuss the application of InfoQ in various domains such as customer survey analysis, risk management, healthcare management, official statistics and reproducible research.

Thursday, October 1st, 11:00 am – 12:00 pm

Quantifying Uncertainties Using Expert Assessments in a Dynamic New Product Development Environment

Speaker: Saurabh Bansal, Pennsylvania State University, Department of Supply Chain and Information Systems


Based on a unique new product development problem at The Dow Chemical Company, a Fortune 100 firm, this paper develops an optimization-based approach to estimate the means and standard deviations of yield distributions for producing new products when an expert provides judgmental estimates for quantiles of the distributions. The approach estimates both the mean and the standard deviation as weighted linear combinations of quantile judgments, where the weights are explicit functions of the expert's judgmental errors. It is analytically tractable and provides the flexibility to elicit any set of quantiles from an expert. The approach also establishes that using an expert's quantile judgments to deduce the distribution parameters is equivalent to collecting data with a specific sample size, and it enables combining the expert's judgments with those of other experts. The theory has been in use at Dow for two years to support an annual decision worth $800 million. Results show that the judgments of the expert at Dow are equivalent to 5-6 years of data collection, and the use of our approach provides significant monetary savings annually as well as intangible benefits. We also discuss practical insights for seeking expert judgment for operational uncertainties in industrial situations.
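As a stylized illustration of recovering distribution parameters from quantile judgments, the sketch below fits a normal distribution to three elicited quantiles. The weights here are the textbook normal-quantile ones, not the error-adjusted weights derived in the paper, and the function name is hypothetical.

```python
# Standard normal z-score at the 90th percentile, Phi^{-1}(0.9).
Z90 = 1.2815515655446004

def normal_from_quantiles(q10, q50, q90):
    """Recover (mean, sd) of a normal distribution from an expert's
    10th, 50th, and 90th percentile judgments.
    Mean: the median itself. Sd: spread between the outer quantiles,
    rescaled by the corresponding standard-normal spread."""
    mean = q50
    sd = (q90 - q10) / (2 * Z90)
    return mean, sd
```

The paper's contribution is precisely to replace such fixed weights with weights that account for the expert's own judgmental error, so that a noisy expert's quantiles are discounted appropriately.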

Friday, September 18th, 11:00 am – 12:00 pm