I2SDS Past Seminars

Navigate our archive of past seminars by expanding the dropdown tabs below.

Spring 2019

Values of Cross-Product Demand Forecasting: Evidence in the Pharmaceutical Industry

Speaker: Anh Ninh, College of William & Mary

Abstract: We introduce a novel demand forecasting framework to facilitate learning across products using supply chain information. It seamlessly leverages modern machine learning algorithms and relevant domain knowledge to improve forecasting accuracy. The benefit of this approach is validated through extensive analysis using a dataset from a major pharmaceutical company.

Friday, May 10th, 11:00 am - 12:00 pm

From Data to Decisions in Healthcare using Optimization Modeling

Speaker: Sanjay Mehrotra, Northwestern University, Industrial Engineering and Management Sciences

Abstract: As management scientists we excel in model development and establishing properties of the models we develop. In this talk we will discuss a reversal of this order. We encounter issues in data quality and quantity, and identify novel methodologies based on design of experiments and optimization for addressing them. We encounter problems with weight ambiguity in multi-objective optimization, which we address by introducing the concepts of robust-Pareto and distributional robustness. The operational problems of resource assignment and scheduling we encounter are highly challenging. We study the structural properties of these problems to solve intractable two-stage mixed integer programs in a reasonable amount of time. The presentation will use case studies based on real data from an institutional enterprise electronic health data warehouse, real patient volumes from different hospital units, the US national transplantation system, and a national survey conducted by the CDC.

Friday, April 26th, 11:00 am – 12:00 pm

Globally-Optimal Solution Algorithm for Security Constrained AC Optimal Power Flow in Electric Power Grids, Its Variations and Applications

Speaker: Masoud Barati, University of Pittsburgh, Department of Electrical and Computer Engineering

Abstract: Non-convex programming involves optimization problems where either the objective function or the constraint set is non-convex. These kinds of problems arise in a broad range of applications in engineering systems. Despite the substantial literature on convex and non-convex quadratic programming (general classes of optimization problems), most available optimization techniques are either not scalable or work efficiently only for convex quadratic programming and do not provide adequate results for non-convex quadratic programming. This talk focuses on fundamental research on an integrated approach which the research team expects will lead to powerful solution methods for classes of non-convex programming problems. The new approach will be applicable to non-convex problems arising in many areas, such as power and energy systems, transportation, and communications. The general difficulty of power and energy optimization problems has a direct impact on power and energy systems management. This is one of the most fundamental concerns that must be dealt with in electrical power system management. The primary objective of this talk is to address the difficulty associated with problem non-convexity by developing high-performance optimization techniques that apply to a broad set of nonlinear energy problems, particularly the Optimal Power Flow (OPF) problem. There is a critical and urgent need for developing smart and robust OPF solvers. The conventional options currently available for DC-OPF are quite limited. I will fundamentally address AC Optimal Power Flow (AC-OPF) with active and reactive quadratically constrained quadratic programming optimization problems of a form that arises in operation and planning applications of the power system. Besides being non-convex, these problems are known to be NP-hard.
The proposed solution method is based on several basic and powerful techniques in convex optimization theory, such as DC decomposition approximation techniques, linear and global search procedures, bilinear and convex relaxation, and alternating direction methods. Also, new schemes and theories must be introduced to establish the convergence of the algorithm and guarantee the global optimality of the solution results. Since linear programming and convex optimization solvers are robust and fast, and the power systems community is already familiar with linear and convex programs, the proposed algorithm will be beneficial and user-friendly for the AC-OPF problem. Theoretical investigations examining the performance of the proposed algorithm and analyzing its efficiency on existing testbed systems and synthetic data sets are also pursued. In this talk, I will present my research on (1) a globally optimal algorithm for AC-OPF based on successive linear optimization and DC decomposition, and (2) inner approximations of feasible spaces of AC-OPF problems. I will also introduce my work on developing the theoretical aspects of the above algorithm and its applications in cyber-physical security and emergency management, power system planning, and power distribution networks.

Friday, April 12th, 11:00 am – 12:00 pm

Free Shipping and its Direct Causal Effect on Online Sales

Speaker: Taner Bilgic, Bogazici University, Department of Industrial Engineering

Abstract: Sales through online marketplaces have been steadily increasing. One of the fundamental differences between traditional and online shopping is the shipping and handling (S&H) operations. Any consumer who uses online shopping starts the transaction with the knowledge that the item needs to be shipped to the consumer at an additional cost (of money and/or time). This fundamental difference seems to have a significant effect on the demand-price interaction for online shopping. We hypothesize that free shipping has two separate causal implications for online sales: via price and an independent effect. Using sales transactions from an online marketplace and an econometric approach, we establish that free shipping is endogenous in several model specifications we consider. Therefore, we use instrumental variables for free shipping and two-stage least squares on panel data. Fixed-effects methodology combined with two-stage least squares frees us from the bias and inconsistency due to endogeneity. We also show that the causal results we obtain are consistently predicted by causal Bayesian machine learning techniques. Our results indicate that free shipping can increase sales and revenues significantly. This effect varies by product category. Free shipping increases the unit price of the product but decreases the total price paid by the customer, in accordance with previous reports in the literature. Demand models for online sales should treat free shipping as endogenous to be more accurate. Category managers of sellers on online marketplaces should consider offering free shipping based on the characteristics of their products, such as volumetric weight and the strength of (price) competition.

Monday, April 8, 11:00 am – 12:00 pm

Partial Sample Average Approximation Method for Chance Constrained Programs

Speaker: Jianqiang Cheng, University of Arizona, Department of Systems and Industrial Engineering

Abstract: In this talk, we present a new sampling-based method, named the Partial Sample Average Approximation (PSAA) method, to solve chance constrained programs. In contrast to Sample Average Approximation (SAA), which samples all of the random variables, PSAA samples only a portion of the random variables, making use of the independence of some of them for stepwise evaluation of the expectation. The main advantage of the proposed approach is that the PSAA approximation problem contains only continuous auxiliary variables, whilst the SAA reformulation contains binary ones. Moreover, we prove that the proposed approach has the same convergence properties as SAA. Finally, a numerical study on different applications shows the strengths of the proposed approach in comparison with other popular approaches, such as SAA and the scenario approach.
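To make the contrast concrete, the following minimal sketch (not the speaker's code; the function name, threshold, and distribution are invented for illustration) shows how SAA turns a one-dimensional chance constraint P(ξ ≤ x) ≥ 1 − ε into per-scenario violation indicators — the quantities that become binary variables in the SAA reformulation and that PSAA avoids:

```python
import random

def saa_chance_feasible(x, sample, eps):
    """SAA check of the chance constraint P(xi <= x) >= 1 - eps.
    Each scenario contributes a 0/1 violation indicator -- the role
    played by the binary variables in the full SAA reformulation."""
    violations = sum(1 for xi in sample if xi > x)
    return violations <= eps * len(sample)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(10000)]

# The 95% quantile of a standard normal is about 1.645, so x = 2.0
# should be comfortably feasible and x = -1.0 clearly infeasible.
print(saa_chance_feasible(2.0, sample, 0.05))
print(saa_chance_feasible(-1.0, sample, 0.05))
```

In a full SAA reformulation each indicator becomes a binary decision variable of a mixed-integer program; PSAA's appeal is that its approximation needs only continuous auxiliary variables.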

Friday, April 5th, 11:00 am - 12:00 pm

Uncertainty Quantification and Bayesian Model Calibration Applied to Stochastic Systems

Speaker: David Higdon, Virginia Tech, Department of Statistics

Abstract: Agent-based models (ABMs) use rules at the individual (agent) level to simulate a social, ecological, or socio-technical system, producing structured behavior when viewed at an aggregated level. For example, dynamic network simulation models commonly evolve a very large collection of agents interacting over a network that evolves with time. Such models are often used to simulate animal populations, epidemics, or transportation, typically producing random trajectories even when model parameters and initial conditions are identical. While Approximate Bayesian Computation has been used with such models to carry out statistical inference, an alternative is to consider the approaches commonly used in uncertainty quantification (UQ) and model calibration. Adapting to the inherent randomness in these simulations is necessary before applying the standard tools of UQ. This talk shows some approaches for adapting Bayesian model calibration to these stochastic systems. We'll consider a case study of a recent epidemic, seeking to forecast the epidemic's behavior given initial administrative information.

Friday, March 1, 11:00 am – 12:00 pm

Spring 2018

Modelling of Large Insurance Claims and Occurrence Data

Speaker: Dipak K. Dey, University of Connecticut, Department of Statistics

Abstract: This presentation is based on analyzing big auto insurance claim data to improve spatial risk classification. We explore a spatial variant of the double generalized linear model (DGLM), in which the Tweedie distribution, as a special case, is used to model the pure premium, and the spatial correlation is incorporated via graph Laplacian regularization. The estimated spatial effects are then used to generate risk rankings at the county level. Simulation results and real data analysis showcase the efficacy of the new methods. Besides our recent progress, the challenges we face in large-scale predictive modeling and our future directions will also be discussed. In particular, we focus on collision data and build models for each state in the US separately.

Thursday, May 31st, 11:00 am – 12:00 pm

Surge Pricing under Spatial Spillover: Evidence from Uber's Operations

Speaker: Nitin Joglekar, Questrom School of Business, Department of Operations and Technology Management 

Abstract: Ride-sharing platforms employ zone-specific surge pricing to match capacity with demand. Since the prices are spatially dispersed, labor capacity spills over (i.e., drivers move around) across zones. We develop an optimization model to characterize the relationship between surge price and spillover. We then investigate how a platform accounts for surge pricing by estimating a spatial panel model on a dataset from Uber’s operations. Results reveal that Uber’s pricing policies account for both capacity spillover and price spillover across adjacent zones. There is a debate in the management community (Saif Benjaafar, Terry Taylor, et al., 2018) on the efficacy of labor welfare mechanisms associated with online shared capacity. We conduct counterfactual analysis to assess the effectiveness of alternative pricing policies using our parameter estimates. This analysis provides guidance regarding that debate for managing congestion while accounting for consumer and labor welfare on this online platform.

Friday, April 6th, 11:00 am – 12:00 pm

Optimal Contracts in Decentralized Projects (or Why Boeing Paid over $5B in Penalty Fees on the B787)

Speaker: Ted Klastorin, University of Washington, Michael G. Foster School of Business, Department of Information Systems & Operations Management

Abstract: Managing decentralized projects effectively is a critical issue today as projects have become increasingly complex, costly, and strategically important (especially IT and product development projects).   In this talk, we analyze a decentralized project that is composed of several serial stages with stochastic durations; the project is planned, organized, and funded by a client organization that contracts the work at each stage to independent subcontractors.  Following both previous research and practice, we assume that the client and subcontractors incur both time and resource related costs as well as penalty/delay costs and/or bonus payments for early completion.  We develop a model for this project that analyzes a linear time-based incentive contract and indicates how the client should set due dates, penalty costs, and bonus payments to maximize her expected profit.  We will compare this contract to a non-linear time-based incentive contract that coordinates a decentralized project and show conditions when the two contracts are similar.  Finally, we will discuss implementation of these contracts and why the proper contract structure is so important.

Friday, March 16th, 11:00 am – 12:00 pm

Dynamics of Homelessness in Urban America

Speaker: Christopher Glynn, University of New Hampshire, Department of Decision Sciences

Abstract: The relationship between housing costs and homelessness has important implications for the way that city and county governments respond to increasing homeless populations. Though many analyses in the public policy literature have examined inter-community variation in homelessness rates to identify causal mechanisms of homelessness, few studies have examined time-varying homeless counts within the same community. To examine trends in homeless population counts in the 25 largest U.S. metropolitan areas, we develop a dynamic Bayesian hierarchical model for time-varying homeless count data. Particular care is given to modeling uncertainty in the homeless count generating and measurement processes, and a critical distinction is made between the counted number of homeless and the true size of the homeless population. For each metro under study, we investigate the relationship between increases in the Zillow Rent Index and increases in the homeless population. Sensitivity of inference to potential improvements in the accuracy of point-in-time counts is explored, and evidence is presented that the inferred increase in the rate of homelessness from 2011-2016 depends on prior beliefs about the accuracy of homeless counts. A main finding of the study is that the relationship between homelessness and rental costs is strongest in New York, Los Angeles, Washington, D.C., and Seattle.

Friday, February 23rd, 11:00 am – 12:00 pm

Deep Learning: A Bayesian Perspective

Speaker: Vadim Sokolov, George Mason University, Systems Engineering and Operations Research

Abstract: Deep learning is a form of machine learning for nonlinear high dimensional pattern matching and prediction. By taking a Bayesian probabilistic perspective, we provide a number of insights into more efficient algorithms for optimisation and hyper-parameter tuning. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR), and projection pursuit regression (PPR), are all shown to be shallow learners. Their deep learning counterparts exploit multiple deep layers of data reduction which provide predictive performance gains. Stochastic gradient descent (SGD) training optimisation and Dropout (DO) regularization provide estimation and variable selection. Bayesian regularization is central to finding weights and connections in networks to optimize the predictive bias-variance trade-off. To illustrate our methodology, we provide an analysis of international bookings on Airbnb. Finally, we conclude with directions for future research.

Friday, February 16, 11:00 am – 12:00 pm

Bayesian Dynamic Regression Trees

Speaker: Simon Wilson, Trinity College Dublin & CMU 

Abstract: The dynamic Bayesian regression tree is a flexible regression model for sequential data that permits the relationship between the response and explanatory variables to evolve smoothly over time through a latent process. As such it is suited to tasks involving concept drift and active learning. This paper shows that exact sequential inference can be performed via implementation of the intermittent Kalman filter, permitting fast computation. Inference on the tree structure is done through an ensemble approach and an exact expression for the posterior weight of each tree in the ensemble can be derived. Extensions of this work will be discussed.
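The Kalman filtering that underlies this exact sequential inference can be illustrated with the standard scalar recursion for a single time-varying regression coefficient. This is a hedged sketch under an assumed random-walk state model with invented settings, not the paper's intermittent Kalman filter:

```python
def kalman_step(m, P, x, y, q, r):
    """One predict/update step for a scalar time-varying coefficient:
    state model  beta_t = beta_{t-1} + w_t,  w_t ~ N(0, q)
    observation  y_t    = x_t * beta_t + v_t, v_t ~ N(0, r)."""
    P = P + q                  # predict: random walk inflates variance
    S = x * P * x + r          # innovation variance
    K = P * x / S              # Kalman gain
    m = m + K * (y - x * m)    # posterior mean after seeing (x, y)
    P = (1.0 - K * x) * P      # posterior variance
    return m, P

# Noiseless data generated with beta = 2: the estimate homes in on 2.
m, P = 0.0, 10.0
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (1.0, 2.0)]:
    m, P = kalman_step(m, P, x, y, q=0.01, r=0.1)
print(round(m, 2))  # ~2.0
```

In a dynamic regression tree, recursions of this kind can run at the leaves, which is what keeps the sequential updates cheap.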

Friday, February 9, 11:00 am – 12:00 pm

An Approximation Approach for Markov Decision Processes with Application to Adaptive Clinical Trials

Speaker: Vishal Ahuja, Southern Methodist University, Cox School of Business

Abstract: Multi-armed bandit (MAB) problems, typically modeled as Markov decision processes (MDPs), exemplify the exploration vs. exploitation tradeoff. An area that has motivated theoretical research in MAB designs is the study of clinical trials, where the application of such designs has the potential to significantly improve patient outcomes and reduce drug development costs. However, for many practical problems of interest, the state space is intractably large, rendering exact approaches to solving MDPs impractical. In particular, settings with latency in observing outcomes that require multiple simultaneous randomizations, as in most practical clinical trials, lead to an expanded state and action-outcome space, necessitating the use of approximation approaches. Approximation methods make it computationally feasible to solve large-scale MDPs. In this study, we propose a novel approximation approach that combines the strengths of multiple methods: grid-based state discretization, value function approximation methods, and techniques for obtaining approximately optimal policies in a computationally efficient manner. The hallmark of our approach lies in the accurate approximation of the value function that combines linear interpolation with bounds on interpolated value and the addition of a learning component to the objective function. Computational analysis on relevant datasets shows that our approach outperforms existing heuristics (e.g. greedy and upper confidence bound family of algorithms) as well as a popular Lagrangian-based approximation method, where we find that the average regret improves by up to 58.3%. A retrospective implementation on a recently conducted phase 3 clinical trial shows that our design could have reduced the number of failures by 17% relative to the randomized control design used in that trial. 
Our proposed approach makes it practically feasible for trial administrators and regulators to implement Bayesian response-adaptive designs on large clinical trials with potentially significant gains. More broadly, our approach offers managers and policymakers an implementable tool to derive approximately optimal solutions in settings where finding a fully optimal solution is not feasible.

Tuesday, February 6th, 11:00 am – 12:00 pm

Stein Discrepancy Methods for Robust Estimation and Regression

Speaker: Emre Barut, George Washington University, Department of Statistics

Abstract: All statistical procedures depend heavily on modeling assumptions and on how close those assumptions are to reality. This dependence is critical: even the slightest deviation from the assumptions can cause major instabilities during statistical estimation.

In order to mitigate issues arising from model mismatch, numerous methods have been developed in the area of robust statistics. However, these approaches are aimed at specific problems, such as heavy tailed or correlated errors. The lack of a holistic framework in robust regression results in a major problem for the data practitioner. That is, in order to build a robust statistical model, possible issues in the data have to be found and understood before conducting the analysis. In addition, the practitioner needs to have an understanding of which robust models can be applied in which situations.

We propose a new framework for parameter estimation, which is given as the empirical minimizer of a second order U-statistic. The approach provides a "silver bullet" that can be used in a range of problems. When estimating parameters in the exponential family, the estimate can be obtained by solving a quadratic convex problem. For parameter estimation, our approach significantly improves upon MLE when outliers are present, or when the model is misspecified. Furthermore, we show how the new estimator can be used for robust high dimensional covariance estimation. Extensions of the method for regression problems and its efficient computation by subsampling are also discussed.

Friday, January 26, 11:00 am – 12:00 pm

Guessing Attacks and Their Performance

Speaker: Serdar Boztas, RMIT University, Melbourne

Abstract: The broad setting for this talk is the following question: How resistant is a secret, such as a password or a cryptographic key, against brute force and more sophisticated guessing attacks?

Traditionally, the Shannon entropy has been used as a measure of randomness. It turns out that for key guessing attacks, various Rényi entropies, including the so-called "guessing entropy," are more relevant, especially in the practically important non-uniform distribution case, where passwords and keys are not generated by ideal distributions but chosen by finite-complexity, real-world processes.
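The distinction between Shannon entropy and guessing entropy is easy to see numerically: Shannon entropy measures average surprise in bits, while guessing entropy is the expected number of guesses when candidates are tried in decreasing order of probability (the optimal attack). A small sketch with toy distributions chosen only for illustration:

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy H(p) in bits."""
    return -sum(q * log2(q) for q in p if q > 0)

def guessing_entropy(p):
    """Expected number of guesses when secrets are tried in
    decreasing order of probability."""
    return sum(i * q for i, q in enumerate(sorted(p, reverse=True), start=1))

uniform = [0.25] * 4            # ideal 2-bit secret
skewed = [0.7, 0.1, 0.1, 0.1]   # a "real world" non-uniform choice

print(shannon_entropy(uniform), guessing_entropy(uniform))  # 2.0 2.5
print(round(shannon_entropy(skewed), 3), guessing_entropy(skewed))
```

For the skewed distribution the optimal attacker needs only 1.6 expected guesses even though the Shannon entropy is still about 1.36 bits, which is why guessing-based measures matter for password security.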

We survey the results in this area over the last 15 years, culminating with an exposition of recent results on Distributed Oblivious Guessing, which considers various attack scenarios under different constraints and different performance measures, such as (i) the probability of success in guessing a secret in K tries, (ii) the expected number of guesses until success, and (iii) different attackers guessing in parallel.

Wednesday, January 31, 11:00 am – 12:00 pm

Fall 2018

Game of Variable Contribution to Common Good under Uncertainty

Speaker: Dharma Kwon, University of Illinois at Urbana-Champaign Gies College of Business, Department of Operations Management

Abstract: We consider a stochastic game of contribution to common good in which the players have continuous control over the degree of contribution, and we examine the gradualism arising from the free rider effect. This game belongs to the class of variable concession games which generalize wars of attrition. Previously known examples of variable concession games in the literature yield equilibria characterized by singular control strategies without any delay of concession. However, these no-delay equilibria are in contrast to mixed strategy equilibria of canonical wars of attrition in which each player delays concession by a randomized time. We find that a variable contribution game with a single state variable, which extends Nerlove-Arrow model, possesses an equilibrium characterized by regular control strategies that result in gradual concession. This equilibrium naturally generalizes the mixed strategy equilibria from the canonical wars of attrition. Stochasticity of the problem accentuates the qualitative difference between a singular control solution and a regular control equilibrium solution. We also find that asymmetry between the players can mitigate the inefficiency caused by the gradualism.

Friday, December 7th, 11:00 am – 12:00 pm

Promoting Solar Panel Investments: Feed-in-Tariff versus Tax-Rebate Policies

Speaker: Safak Yucel, Georgetown University McDonough School of Business, Department of Operations and Information Management

Abstract: Governments have adopted various subsidy policies to promote investment in renewable energy sources, such as rooftop solar panels. The German government uses a feed-in-tariff policy that provides a guaranteed stream of payments for each unit of electricity generated by a household. In contrast, the U.S. government uses a tax-rebate policy that reduces the initial investment cost and the household receives the retail price for the generated electricity. In this paper, we study the key practical factors that favor one policy over the other from the perspective of the government. These factors are the heterogeneity in the generating efficiency, the variability in the electricity price, and the variability in the investment cost.

We consider an infinite-horizon, continuous-time model where the government moves first and announces either a feed-in tariff or a tax rebate. Then, each household dynamically decides if and when to invest in a solar panel unit. The objective of the government is to maximize the expected value of a subsidy policy, i.e., the difference between the societal benefit of solar panel investments and the subsidy cost over time.

We characterize the timing of the investment decision of the households and the optimal subsidy parameters for the government. We identify which operational factors favor the feed-in-tariff or the tax-rebate policy. Our results suggest that a government should prefer the feed-in-tariff policy when the electricity price is highly uncertain. Intuitively, feed-in-tariff policy eliminates the price variability; thus, it removes the strategic delay in the investment. The tax-rebate policy should be adopted if the households are heterogeneous in generating efficiency, if the investment cost is highly variable, or if the price and cost uncertainty are positively correlated.

This is joint work with Vlad Babich (Georgetown University) and Ruben Lobel (Airbnb).

Friday, November 16th, 11:00 am – 12:00 pm

Replication or Exploration: Sequential Design for Stochastic Simulation Experiments

Speaker: Robert B. Gramacy, Virginia Tech, Department of Statistics

Abstract: We investigate the merits of replication, and provide methods that search for optimal designs (including replicates), in the context of noisy computer simulation experiments. We first show that replication offers the potential to be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead-based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.
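Part of replication's appeal rests on a simple statistical fact: averaging n replicates at one design point cuts the noise variance by a factor of n, so a surrogate model can treat the averaged output as a far less noisy observation. A quick empirical check (invented settings, not the authors' heteroskedastic GP code):

```python
import random

def replicate_mean_var(n_rep, sigma, trials, seed=1):
    """Empirical variance of the average of n_rep noisy replicates
    at a single design point (true response 0, noise sd sigma)."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, sigma) for _ in range(n_rep)) / n_rep
             for _ in range(trials)]
    mu = sum(means) / trials
    # Sample variance of the replicate averages across trials.
    return sum((m - mu) ** 2 for m in means) / (trials - 1)

# Averaging 10 replicates should cut the noise variance roughly tenfold:
v1 = replicate_mean_var(1, 1.0, 4000)    # roughly 1.0
v10 = replicate_mean_var(10, 1.0, 4000)  # roughly 0.1
print(round(v1, 2), round(v10, 2))
```

The design question the talk addresses is when this variance reduction at an existing input is worth more than the information gained by exploring a new one.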

Friday, November 2nd, 11:00 am – 12:00 pm

The Effect of Flexibility in Delegating Innovation

Speaker: Karthik Ramachandran, Georgia Institute of Technology, Scheller College of Business, Department of Operations Management

Abstract: In many contexts such as product design and advertising, clients seek the expertise of external providers to generate innovative solutions for their business problems. In such delegated engagements, providers can improve the quality of solutions through the intensity of their efforts, and clients can evaluate solutions and decide when to stop the project. In this paper, we explore how the client’s flexibility in stopping the project influences the progress and efficiency of the delegated innovation. In particular, we compare two structures: Committed, where the client stops the project immediately if the provider delivers an acceptable solution, and Open-ended, where the client retains the flexibility to continue the project even after receiving an acceptable solution. We show that, when innovation is delegated, the client’s flexibility can lead to lower early efforts by the provider and thus may not always benefit the client. We generate insights regarding the appropriateness of the two structures with respect to the problem difficulty and the provider’s capability. In addition, we extend our model and analysis in several directions by capturing the effects of the client’s transparency, the project timeline, optimal payments, and the provider’s capability improvement.

Friday, October 12th, 11:00 am – 12:00 pm

Anchored Bayesian Gaussian Mixture Models

Speaker: Mario Peruggia, Ohio State University, Department of Statistics

Abstract: We describe a novel approach to the specification of Bayesian Gaussian mixture models that eliminates the label switching problem. Label switching refers to the invariance of the posterior distribution for the component-specific parameters to relabeling of the components when an exchangeable prior is used. There are two common approaches to address this issue. The first breaks the exchangeability assumption by imposing artificial constraints on some model parameters (or specifies some other informative prior). The second approach relabels the MCMC samples generated to estimate the exchangeable model in a way that favors one specific relabeling of the components. Our approach forces a small number of observations, which we call the anchor points, to arise from prespecified components of the mixture. Specifying the anchor points is tantamount to specifying an informative, data-dependent prior, in which some observations are assumed to arise from a given component with probability one. Using simulated and real data, we show that a careful choice of the anchor points can yield marginal posterior distributions for the component-specific parameters that are well separated and interpretable.
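The label-switching invariance is easy to verify directly: permuting the component labels of a Gaussian mixture (together with their weights) leaves the likelihood, and hence an exchangeable posterior, unchanged. A minimal sketch with invented toy data and parameters:

```python
from math import exp, log, pi, sqrt

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) at y."""
    return exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def mixture_loglik(data, weights, mus, sigmas):
    """Log-likelihood of a Gaussian mixture model."""
    return sum(log(sum(w * normal_pdf(y, m, s)
                       for w, m, s in zip(weights, mus, sigmas)))
               for y in data)

data = [-1.2, 0.3, 2.8, 3.1]
# Labeling A: component 1 at mean 0, component 2 at mean 3.
ll_a = mixture_loglik(data, [0.4, 0.6], [0.0, 3.0], [1.0, 1.0])
# Labeling B: the same mixture with the component labels swapped.
ll_b = mixture_loglik(data, [0.6, 0.4], [3.0, 0.0], [1.0, 1.0])
print(abs(ll_a - ll_b) < 1e-9)  # True
```

Anchoring breaks exactly this symmetry: forcing a few observations to arise from prespecified components makes the two labelings no longer equally supported by the posterior.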

(Joint work with Deborah Kunkel)

Friday, October 5, 11:00 am – 12:00 pm

Delayed Reporting and Correlated Failures in Multicomponent Systems

Speaker: Richard Arnold, Victoria University of Wellington, School of Mathematics and Statistics, New Zealand                 

Abstract: Multicomponent systems may experience failures with correlations amongst failure times of groups of components, and some subsets of components may experience common cause, simultaneous failures.  We present a novel, general approach to model construction and inference in multicomponent systems incorporating these correlations in an approach that is tractable even in very large systems.  In our formulation the system is viewed as being made up of Independent Overlapping Subsystems (IOS).  In these systems components are grouped together into overlapping subsystems, and further into non-overlapping subunits.  Each subsystem has an independent failure process, and each component's failure time is the time of the earliest failure in all of the subunits of which it is a part.

When subcritical failures occur in a system, the user may delay repair until a number of failures have accumulated.  In the case of warranty claims this results in multiple simultaneous claims. We present a simple stochastic model for the occurrence of these multiple reports.

(Joint work with Stefanka Chukova, Victoria University of Wellington, and Yu Hayakawa, Waseda University, Tokyo)

Friday, September 21, 11:00 am – 12:00 pm

Learning Enabled Optimization

Speaker: Suvrajeet Sen, University of Southern California

Abstract: Traditionally, Stochastic Optimization deals with optimization models in which some of the data are modeled using random variables. In contrast, Learning Models are intended to capture the behavior of covariates: the goal is to characterize the behavior of the response (a random variable) as a function of the predictors (also random variables). The field of Statistical (or Machine) Learning focuses on understanding these relationships. The goal of this talk is to present a new class of composite optimization models in which the learning and optimization models live symbiotically. We will discuss several such examples and how they give rise to a rich class of problems.

This talk is based on the work of several Ph.D. students, in particular Yunxiao Deng, Junyi Liu, and Shuotao Diao.

Monday, September 17th, 4:00 pm - 5:00 pm

Information Density in Sensitivity Analysis and Optimization

Speaker: Emanuele Borgonovo, Bocconi University, Department of Decision Sciences

Monday, August 6th, 11:00 am – 12:00 pm

Fall 2017

Incentives and Competition in Innovation Contests with Public Submissions: Can “Star” Power Help or Hurt Competition?

Speaker: Cheryl Druehl, George Mason University, School of Business

Abstract: Innovation contests are being increasingly used by firms to harness specialized skills of participants with diverse backgrounds for solving challenging business problems. We examine the interaction between incentives and the participation of top-level contestants (i.e., contestants that rank in the top 5% of winning experience among all contestants on the contest platform), where submissions and contestant identity are viewable by all contestants and the information structure changes during the contest. In such a format, contestants, in deciding how to participate, must weigh the costs of revealing their submissions against the benefits of improving their submissions through emerging information. We focus on two specific types of incentive mechanisms for participation, absolute incentives (prize amount) and relative incentives (prizes for other ongoing contests). Top-level contestants are likely to value these incentives differently from others, affecting how they view the tradeoffs associated with participation in an unblind contest, and specifically, when they choose to enter a contest by making their first submission. We also examine the implications of contestant visibility, specifically how the visible participation by top-level contestants in a contest influences others in participating and expending effort. The empirical analysis is carried out using a large dataset comprised of detailed data from 1,024 innovation contests and 453 unique contestants from a graphic design website, Logomyway.com, which specializes in logo-design contests. Our findings suggest that, in unblind contest environments where contestant identities are visible, incentive mechanisms play an unanticipated role—i.e., they constrain the contest landscape to a select group of top-level contestants, while disincentivizing others from entering the contest and expending effort.

Friday, December 1st, 11:00 am – 12:00 pm

Establishing Trust and Trustworthiness in Global Businesses

Speaker: Ozalp Ozer, University of Texas at Dallas, Jindal School of Management

Abstract: In this presentation, we will discuss when, how, and why the behavioral motives of trust and trustworthiness arise to support credible information sharing and cooperation within and across businesses. We identify four building blocks of trust and trustworthiness: personal values and norms, market environment, business infrastructure, and business process design. We will elaborate on these building blocks and offer tangible insights about how to establish trusting and cooperative business relationships. To do so, we will provide a high level summary of some research results and case studies from across industries.

Friday, November 17th, 11:00 am – 12:15 pm

Dissecting Multiple Imputation from a Multi-phase Inference Perspective: What Happens When God's, Imputer's and Analyst's Models Are Uncongenial?

Speaker: Xiao-Li Meng, Harvard University, Department of Statistics

Abstract: Real-life data are almost never really real. By the time the data arrive at an investigator's desk or disk, the raw data, however defined, have most likely gone through at least one “cleaning” process, such as standardization, re-calibration, imputation, or de-sensitization. Dealing with such a reality scientifically requires a more holistic multi-phase perspective than is permitted by the usual framework of “God's model versus my model.” This article provides an in-depth look, from this broader perspective, into multiple-imputation (MI) inference (Rubin (1987)) under uncongeniality (Meng (1994)). We present a general estimating-equation decomposition theorem, resulting in an analytic (asymptotic) description of MI inference as an integration of the knowledge of the imputer and the analyst, and establish a characterization of self-efficiency (Meng (1994)) for regulating estimation procedures. These results help to reveal how the quality of and relationship between the imputer's model and analyst's procedure affect MI inference, including how a seemingly perfect procedure under the “God-versus-me” paradigm is actually inadmissible when God's, imputer's, and analyst's models are uncongenial to each other. Our theoretical investigation also leads to useful procedures that are as trivially implementable as Rubin's combining rules, yet with confidence coverage guaranteed to be minimally the nominal level, under any degree of uncongeniality. We reveal that the relationship is very complex between the validity of approaches taken for individual phases and the validity of the final multi-phase inference, and indeed that it is a nontrivial matter to quantify or even qualify the meaning of validity itself in such settings. These results and many open problems are presented to raise the general awareness that the multi-phase inference paradigm is an uncongenial forest populated by thorns, as well as some fruits, many of which are still low-hanging.
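For readers unfamiliar with the Rubin combining rules the abstract invokes, the textbook formulas pool m completed-data estimates (this is the standard statement, not the new procedures of the talk; notation ours):

```latex
\bar{Q}_m = \frac{1}{m}\sum_{j=1}^{m} \hat{Q}_j, \qquad
\bar{U}_m = \frac{1}{m}\sum_{j=1}^{m} U_j, \qquad
B_m = \frac{1}{m-1}\sum_{j=1}^{m} \bigl(\hat{Q}_j - \bar{Q}_m\bigr)^2,
\qquad
T_m = \bar{U}_m + \Bigl(1 + \frac{1}{m}\Bigr) B_m,
```

where each imputed data set yields an estimate \(\hat{Q}_j\) with variance estimate \(U_j\), and \(T_m\) is the total variance combining within-imputation and between-imputation components.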

Friday, October 13th, 11:00 am – 12:00 pm

New Ideas in Auctions

Speaker: David Banks, Duke University, Department of Statistical Sciences

Abstract: Adversarial Risk Analysis allows one to pose a new class of auction problems for n >= 3 players.  Instead of the usual common knowledge assumption that enables the classic Bayes Nash equilibrium solution, it is perfectly reasonable to suppose that two opponents have different beliefs about the distribution of a third opponent's bid. This situation has not been solved within the standard game theory framework. This talk describes two ways to address that problem from an ARA perspective.

Friday, October 6th, 11:00 am – 12:00 noon

Goodness-of-Fit Tests for Univariate and Multivariate Distributions

Speaker: Baris Surucu, Middle East Technical University, Department of Statistics, Ankara, Turkey

Abstract: We discuss goodness-of-fit tests for univariate as well as multivariate distributions. New tests are proposed for a variety of univariate distributions. Their extensions to multivariate structures are provided for complete and censored samples. Simulation studies are given to compare the efficiencies of these newly defined test statistics with those of existing tests. Some real examples are also provided to illustrate the subject.

Friday, September 22nd, 11:00 am – 12:00 noon

Spring 2017

Strategic Decisions in Batch-service Queueing Systems

Speaker: Olga Bountali, Southern Methodist University, Lyle School of Engineering

Abstract: Customers’ active decision making in a system of interest has proved to be very impactful for the operation of the system, compared to traditional static/dynamic optimization approaches. At the same time, batch-service systems are quite often encountered in practice in service, supply chain, and healthcare applications. To bridge these topics of interest, we consider active (strategic) customers in a Markovian queue with batch services. We derive customer equilibrium strategies regarding the joining/balking dilemma, considering several levels of information provided upon arrival. We find that, in contrast to single-service models, a customer's decision to join induces both positive and negative externalities on other customers. This fact leads to an intricate mixture of Follow-The-Crowd and Avoid-The-Crowd behavior and possibly multiple equilibrium strategies. We also discuss the effects of the information and the batch size on the strategic behavior of the customers and on the overall social welfare.

Monday, May 15th, 3:30 pm – 4:30 pm

Adversarial Risk Analysis

Speaker: Jorge Gonzalez-Ortega, Instituto de Ciencias Matematicas, Madrid, Spain

Abstract: Recent events such as terrorist attacks or economic crises have stressed the need for decision-makers to take into account the strategies of their opponents in conflicting situations. Economic competition, security conflicts, disease control and government regulation are all examples of areas in which multiple agents with different goals collide while pursuing their own interests. The classic approach in mathematics and economics to such problems has been game theory, yet a main drawback of this methodology is its underlying common knowledge assumption. Most versions of non-cooperative game theory assume that adversaries not only know their own payoffs, preferences, beliefs and possible actions, but also those of their opponents. When there is uncertainty in the game, we typically assume that players have common probabilities, as in games of incomplete information. These common knowledge assumptions allow for a symmetric joint normative analysis in which players maximize their expected utilities, and expect other players to do the same. Their decisions can then be anticipated and are predicted by Nash equilibria and related concepts. However, in many contexts, including counter-terrorism or cybersecurity, players will not generally have such knowledge about their opponents. Adversarial Risk Analysis (ARA) provides a way forward, as common knowledge is not required. I will present the main concepts in ARA and different models that have been developed within this approach, including some examples in security.

Friday, May 12th, 11:00 am – 12:00 pm

Distributionally Robust Stochastic Optimization with Wasserstein Distance

Speaker: Anton Kleywegt, Georgia Institute of Technology, Milton Stewart School of Industrial & Systems Engineering

Abstract: Consider an optimization problem under uncertainty. One may consider formulating it as a stochastic optimization problem. Often in such a setting, a "true" probability distribution may not be known. In fact, often the notion of a true probability distribution may not even be applicable. We consider an approach, called distributionally robust stochastic optimization (DRSO), in which one hedges against all probability distributions in a chosen set. We point out that the popular sets based on phi-divergences such as Kullback-Leibler divergence have poor properties for some problems, and that sets based on Wasserstein distance have more appealing properties. Motivated by that observation we consider distributionally robust stochastic optimization problems that hedge against all probability distributions that are within a chosen Wasserstein distance from a nominal distribution, for example an empirical distribution resulting from available data. Such a choice of sets has two advantages: (1) The resulting distributions hedged against are more reasonable than those resulting from sets based on phi-divergences. (2) The problem of determining the worst-case expectation over the resulting set of distributions has desirable tractability properties. We derive a dual reformulation of the corresponding DRSO problem and construct worst-case distributions explicitly via the first-order optimality conditions of the dual problem. Our contributions are five-fold. (i) We identify necessary and sufficient conditions for the existence of a worst-case distribution, which are naturally related to the growth rate of the objective function. (ii) We show that the worst-case distributions resulting from an appropriate Wasserstein distance have a concise structure and a clear interpretation. (iii) Using this structure, we show that data-driven DRSO problems can be approximated to any accuracy by robust optimization problems, and thereby many DRSO problems become tractable by using tools from robust optimization. (iv) To the best of our knowledge, our proof of strong duality is the first constructive proof for DRSO problems, and we show that the constructive proof technique is also useful in other contexts. (v) Our strong duality result holds in a very general setting, and we show that it can be applied to infinite dimensional process control problems and worst-case value-at-risk analysis.
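In symbols, the Wasserstein-ball DRSO problem described above takes the following form (a standard statement; the notation is ours, not necessarily the speaker's):

```latex
\min_{x \in X} \;\; \sup_{P :\, W_p(P, \hat{P}_N) \le \rho} \; \mathbb{E}_{P}\bigl[ h(x, \xi) \bigr],
\qquad
W_p(P, Q) = \left( \inf_{\pi \in \Pi(P, Q)} \int d(\xi, \zeta)^p \, \pi(d\xi, d\zeta) \right)^{1/p},
```

where \(\hat{P}_N\) is the nominal (e.g., empirical) distribution, \(\rho\) is the radius of the ambiguity set, and \(\Pi(P,Q)\) is the set of joint distributions (couplings) with marginals \(P\) and \(Q\).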

This is joint work with Rui Gao at Georgia Tech.

Friday, April 21st, 11:00 am – 12:00 pm

Routing Unmanned Air Vehicles with Multiple Objectives

Speaker: Murat Köksalan, Middle East Technical University and Georgetown University

Abstract: Unmanned Air Vehicles (UAVs) are widely used for both military and civilian purposes.  Typically, the vehicle starts from a base, visits targets, and returns to the base.  Under a single objective, such as minimizing the distance traveled, the UAV routing can be modeled as a traveling salesperson problem. In this talk, I will briefly review multiple criteria decision making and address the UAV routing problem under two objectives. When there are two or more objectives, there can be many paths between target pairs and many possible tours using combinations of these paths that are of interest.  Many such solutions need to be considered because a solution may be superior to any other solution in some objective(s). Choosing a solution requires making tradeoffs between different objectives. We characterize discrete and continuous terrains, and the solutions that are potentially interesting. We develop approaches to aid route planners for choosing preferred solutions based on traveled distance and detection threat objectives.

Wednesday, April 11th, 11:00 am – 12:00 noon

Show or Tell: A Tanzanian Mobile Money Field Experiment Measuring the Impacts of SMS Messages on Agent Performance

Speaker: Jason Acimovic, Pennsylvania State University, Smeal School of Business   

Abstract: Mobile money allows those in developing nations without access to traditional banks the ability to save, withdraw, deposit, and send money electronically through the wireless providers' networks using a mere feature phone. The interface between the electronic financial network and consumer is the mobile money agent.  Agents are their own bosses, and work on commission from the wireless providers. Thus, it is the agent who must decide how much physical and electronic cash to stock each day in order to fulfill customer demands for deposits and withdrawals.  We implement a randomized controlled trial at a Tanzanian operator to test the impact of daily SMS stocking suggestions and demand information on agents' stockout rates.  We use the newsvendor heuristic outlined in Balasubramanian et al. (2017) to generate daily recommendations customized for each agent as to how much electronic cash and physical cash to hold.  We also test the impact of information (50th and 90th percentile transaction volumes) as well as the impact of in-person training.  We show that overall, agents receiving daily SMS messages had a lower proportion of days-stocked-out by 80 basis points as compared to the control group.  We also investigate the impacts of heterogeneity as well as other insights derived from questionnaires collected at training sessions.

Friday, April 14th, 11:00 am – 12:00 pm

Modeling and Optimization of Cascading Processes in Networks

Speaker: Pavlo Krokhmal, University of Arizona

Abstract: We consider the problem of modeling and optimizing cascading, or “domino”-like, processes in networks in the presence of uncertainty, such as the propagation of marketing information or of political or consumer preferences in social networks, the spread of failures in engineering networks, and so on. To this end, we propose a general model that represents a cascading process in a network as a Markov process on a multiscale graph. Assuming that the parameters governing this Markov process can be modified at certain costs, we investigate the question of optimal resource allocation that would minimize the time by which the cascading process reaches all the nodes in the network. Under some natural assumptions, we derive analytical solutions for an optimal resource allocation that optimizes the spread of the cascade. These expressions explicitly elucidate the importance of network interactions for the rate at which the cascade spreads. We show that an optimal resource allocation can be computed in strongly polynomial time in the size of the network. Importantly, the obtained solution is expressed via a minimum spanning arborescence on an auxiliary graph and provides a hierarchy that classifies the clusters of the network in terms of their importance with respect to cascade propagation.

March 31st, 11:00 am - 12:00 pm

Bayesian Ensembles: When and Why To Extremize Binary-Event Forecasts?

Speaker: Casey Lichtendahl, University of Virginia, Darden School of Business

Abstract: Many firms face critical decisions that rely on forecasts of binary events -- events such as whether a borrower will default on a loan or not. In these situations, firms often gather forecasts from multiple experts or models. This raises the question of how to aggregate the forecasts. Because linear combinations of probability forecasts are known to be underconfident, we introduce a class of aggregation rules, called Bayesian ensembles, that are non-linear in the experts' probabilities. These ensembles are generalized additive models of experts' probabilities. These models have three key properties. They are coherent, i.e., consistent with the Bayesian view. They can aggregate calibrated or miscalibrated forecasts. And they tend to be more extreme, and therefore more confident, than the commonly used linear opinion pool. Empirically, we demonstrate that our ensemble can be easily fit to real data using a generalized linear model framework. We use this framework to aggregate several forecasts of loan defaults in the Fannie Mae single-family loan performance data. The forecasts come from several leading machine-learning algorithms. Our Bayesian ensemble offers an improvement out-of-sample over the linear opinion pool and over any one of the individual machine learning algorithms considered, two of which -- the random forest and extreme gradient boosted trees -- are already ensembles themselves.
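As a toy illustration of why extremizing matters (a minimal sketch; the function names and the simple log-odds averaging rule below are ours for illustration, not the fitted Bayesian ensemble of the talk), averaging probabilities in log-odds space and scaling by a factor alpha > 1 yields more confident forecasts than the linear opinion pool:

```python
import math

def linear_pool(ps):
    # Linear opinion pool: simple average of expert probabilities.
    return sum(ps) / len(ps)

def extremized_ensemble(ps, alpha=2.0):
    # Average in log-odds space, then scale by alpha > 1 to extremize
    # (push the aggregate away from 0.5, toward 0 or 1).
    logit = lambda p: math.log(p / (1 - p))
    mean_logit = sum(logit(p) for p in ps) / len(ps)
    return 1 / (1 + math.exp(-alpha * mean_logit))

probs = [0.7, 0.8, 0.75]  # three experts' forecasts of the same binary event
```

With these forecasts the linear pool stays at 0.75, while the extremized ensemble pushes to roughly 0.9; the generalized linear model framework in the abstract effectively learns alpha-like weights from data instead of fixing them.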

Friday, March 17th, 11:00 am – 12:00 pm

Vine Regression

Speaker: Roger M. Cooke, Resources for the Future

Abstract: Regular vines, or vine copulas, provide a rich class of multivariate densities with arbitrary one-dimensional margins and Gaussian or non-Gaussian dependence structures. The density enables calculation of all conditional distributions; in particular, regression functions for any subset of variables conditional on any disjoint set of variables can be computed, either analytically or by simulation. Regular vines can be used to fit or smooth non-discrete multivariate data. The epicycles of regression - including/excluding covariates, interactions, higher-order terms, multicollinearity, model fit, transformations, heteroscedasticity, bias, convergence, efficiency - are dispelled, and only the question of finding an adequate vine copula remains. This article illustrates vine regression with a data set from the National Longitudinal Survey of Youth relating breastfeeding to IQ. Based on the Gaussian C-vine, the expected effects of breastfeeding on IQ depend on IQ, on the baseline level of breastfeeding, on the duration of additional breastfeeding and on the values of other covariates. A child given 2 weeks of breastfeeding can expect to increase his/her IQ by 1.5 to 2 IQ points by adding 10 weeks of breastfeeding, depending on the values of other covariates. Averaged over the NLSY data, 10 weeks of additional breastfeeding yields an expected gain of 0.726 IQ points. Such differentiated predictions cannot be obtained from regression models that are linear in the covariates.

Friday, March 10th, 11:00 am – 12:00 pm

Information and Hazard Properties of Escort Models 

Speaker: Ehsan S. Soofi, University of Wisconsin-Milwaukee, Lubar School of Business

Abstract: The escort distribution is the normalized power of a probability density (mass) function used in nonextensive systems of statistical mechanics. The generalized escort distribution is the normalized product of powers of two density functions. Distributions of this form have appeared in statistics in various contexts under different names, such as the Hellinger path in the context of Chernoff’s bound and the power prior in Bayesian analysis. These distributions arise as solutions to different types of information-theoretic formulations in physics and statistics. We first synthesize the existing information formulations for deriving escort distributions and introduce a maximum entropy formulation whose solution is a normalized product of multiple distributions. Potential applications include the escort pool of likelihood models and expert opinions. We also explore the hazard properties of the generalized escort distribution. A notable property is that the escort pool of non-constant hazard distributions can have a constant hazard rate. The survival functions of the proportional hazard and the mixture hazard models are the escort and generalized escort of the survival functions. We show optimality of these models according to various information and expected-variation criteria.
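For concreteness, the escort and generalized escort densities mentioned in the abstract have the following standard forms (notation ours):

```latex
f_q(x) = \frac{f(x)^q}{\int f(y)^q \, dy},
\qquad
g_a(x) = \frac{f_1(x)^{a} \, f_2(x)^{1-a}}{\int f_1(y)^{a} \, f_2(y)^{1-a} \, dy},
\quad 0 \le a \le 1,
```

so the escort \(f_q\) is a normalized power of a single density, and the generalized escort \(g_a\) is a normalized product of powers of two densities (the Hellinger path corresponds to varying \(a\)).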

Friday, February 24th, 11:00 am – 12:00 pm

Manipulating Market Thickness via Listing Policies in Online B2B Markets

Speaker: Wedad Elmaghraby, University of Maryland, Robert H. Smith School of Business

Abstract: For big-box retailers, excess inventory is a recurring headache amounting to $424 billion a year. In their desire to move this excess inventory out of their stores as quickly and as profitably as possible, many retailers are turning to online B2B auction markets, selling their excess inventory to (business) resellers. We work with one of the leading online market platforms enabling and supporting these online B2B auctions to better understand the levers that the auction site can use to help increase recovery rates on these auctions. Specifically, we study the resale of excess and returned consumer electronics merchandise from two big-box retailers who operate two independent auction sites (hosted by the same online market platform). Based on a natural field experiment across these two auction sites, we empirically identify that increasing the market thickness, i.e., the number of open auctions, by concentrating the auction ending times on certain days of the week, has a significant positive effect on the auction final prices. This finding underscores the critical role of the listing policy in matching supply and demand in online B2B markets.

We devise a simple structural model that characterizes the equilibrium bidding behavior in an online B2B auction market, capturing bidder heterogeneity both with regards to their valuation for the products being sold as well as the size of their total demand (for the products). We characterize bidders' equilibrium strategies regarding market visit, auction selection, and bidding behavior. The revenue differences estimated by the structural model are consistent with the result of the natural experiment. We then consider alternative listing policies (counterfactuals) that suggest that there is a ‘sweet spot’ of market thickness relative to demand levels, and hence the auctioneer can use the listing policy, and implied market thickness, to balance the bidder participation and satiation of demand. (This is joint work with Kostas Bimpikis, Ken Moon, Wenchang Zhang)

Friday, February 17th, 11:00 am – 12:00 pm

Sequential Bayesian Analysis of Multivariate Count Data

Speaker: Tevfik Aktekin, University of New Hampshire, Department of Decision Sciences

Abstract: We develop a new class of dynamic multivariate Poisson count models that allow for fast online updating. We refer to this class as multivariate Poisson-scaled beta (MPSB) models. The MPSB model allows for serial dependence in count data as well as dependence with a random common environment across time series. Notable features of our model are analytic forms for state propagation, predictive likelihood densities and sequential updating via sufficient statistics for the static model parameters. Our approach leads to a fully adapted particle learning algorithm and a new class of predictive likelihoods and marginal distributions which we refer to as the (dynamic) multivariate confluent hyper-geometric negative binomial distribution (MCHG-NB) and the dynamic multivariate negative binomial (DMNB) distribution, respectively. To illustrate our methodology, we use a simulation study and empirical data on weekly consumer non-durable goods demand.

Thursday, February 9th, 11:00 am – 12:00 pm

Pricing under the Markov Chain Choice Model

Speaker: Huseyin Topaloglu, Cornell Tech, School of Operations Research and Information Engineering

Abstract: In this talk, we discuss pricing problems under the Markov chain choice model. In this choice model, a customer arrives into the system with an interest to purchase a particular product. After checking the price of this product, the customer decides whether to purchase the product. If she decides not to purchase this product, then the customer transitions to another product or to the no-purchase option according to a certain transition probability matrix. If the customer transitions to another product, then she checks the price of the next product. In this way, the customer transitions between the products until she purchases a product or transitions into the no-purchase option. We discuss four classes of pricing problems. First, we discuss static pricing problems, where the goal is to find the prices to charge for the products to maximize the expected revenue obtained from a customer. We show how to compute the optimal prices tractably. Second, we discuss dynamic pricing problems with a single resource, where we offer multiple products by using a single resource and the sale of a product consumes the inventory of the resource. We characterize structural properties of the optimal policy. Third, we discuss dynamic pricing problems over a network of resources, where we offer multiple products by using a network of resources and the sale of a product consumes a combination of resources. A standard fluid approximation to the problem is a non-convex program. We give an equivalent convex formulation. Fourth, we focus on competitive pricing problems under the Markov chain choice model. Finally, we touch on how to estimate the parameters of the Markov chain choice model. This is joint work with James Dong and Serdar Simsek.
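A small numerical sketch of how purchase probabilities arise under this choice model at fixed prices (the two-product numbers below are invented for illustration; the pricing optimization itself is the subject of the talk). Expected visits to each product satisfy a linear system, since a customer who does not buy at product i moves on with the remaining probability:

```python
import numpy as np

# Hypothetical 2-product instance: all numbers below are made up.
alpha = np.array([0.6, 0.4])    # prob. a customer arrives at each product
theta = np.array([0.5, 0.3])    # purchase prob. at the current prices
rho = np.array([[0.0, 0.7],     # transition probs among products on no-purchase;
                [0.6, 0.0]])    # leftover row mass goes to the no-purchase option

# Expected visits v satisfy v = alpha + (v * (1 - theta)) @ rho,
# i.e. v @ (I - diag(1 - theta) @ rho) = alpha.
M = np.eye(2) - np.diag(1 - theta) @ rho
v = np.linalg.solve(M.T, alpha)   # solves v @ M = alpha
purchase_prob = v * theta         # prob. the customer buys each product
```

Here `purchase_prob` sums to about 0.66, the remainder being the chance the customer leaves without buying; static pricing then chooses the prices behind `theta` to maximize the expected revenue `sum(p * purchase_prob)`.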

February 10th, 11:00 am - 12:00 pm


Speaker: Chung-Li Tseng, University of New South Wales, School of Business

Abstract: Iowa intends to invest in cellulosic biofuel production to expand its renewable fuels consumption. The yield of the main feedstock, corn stover, fluctuates due to weather uncertainty, especially the amount of rainfall. Dual sourcing is a possible strategy to mitigate this supply uncertainty; in our case, the additional supply source is the option of growing dedicated energy crops. A real options approach is used to analyze the optimal investment timing and the benefits of dual sourcing.

Friday, January 27th, 11:00 am – 12:00 pm

Bayesian Quality Control in the Big Data Era

Speaker: Fabrizio Ruggeri, CNR-IMATI, Milano 

Abstract: Quality control has long relied on a relatively limited number of observations and, in a Bayesian framework, on expert opinions about a very restricted set of parameters. The availability of large amounts of data and the increased complexity of the industrial processes at hand (involving a remarkable number of parameters) provide new challenges in both monitoring many processes at the same time and using experts' knowledge to assess priors on the parameters of interest. The talk will review the current literature in the field and discuss directions for future research.

Thursday, January 19th, 11:30 am – 12:30 pm

Fall 2016


Speaker: Jeremy Hutchison-Krupat, University of Virginia, Darden School of Business

Abstract: Senior leadership has two primary levers to influence a direct report: incentives and communication. Financial incentives are credible and precisely specified but offer limited flexibility. In contrast, communication is flexible but lacks precision, and must be deemed credible to affect a direct report’s actions. We study a setting where senior leadership seeks to add a new initiative to their organization’s portfolio. The initiative’s potential to create value is not initially well-understood. Senior leadership eventually obtains more precise information on the initiative’s value, and subsequently, may communicate this information to their direct report. We analyze senior leadership’s incentive and communication decisions, and ultimately their portfolio decision. We find that senior leadership’s communication affects a direct report’s actions only when a new initiative’s potential to create value is sufficiently uncertain. Additionally, we find instances where an organization may benefit from communication that offers less specificity.

Friday, December 9th, 11:00 am – 12:00 pm

Dynamical Spatio-Temporal Processes for Modeling Geographic Expansion

Speaker: Ali Arab, Department of Mathematics and Statistics, Georgetown University

Abstract: Geographic expansion is an important factor for business growth. Statistical analysis of geographic expansion may be conducted through spatio-temporal dynamical processes, which are in turn often described by partial differential equations (PDEs). The inherent complexity of spatio-temporal processes due to high dimensionality and multiple scales of spatial and temporal variability is often intensified by characteristics such as data sparsity and complicated geographical domains. In addition, uncertainties in the appropriateness of any given PDE for a real-world process, as well as uncertainties in the parameters associated with the PDEs, are typically present. These issues necessitate efficient parameterizations of spatio-temporal models that are capable of addressing such characteristics. In this talk, a hierarchical Bayesian model characterized by PDE-based dynamics for a spatio-temporal diffusion process is presented and discussed. As an example, a spatio-temporal model for binary data with application to the geographic expansion of Walmart stores will be presented.

Friday, November 11th, 11:00 am – 12:00 pm

Research and Teaching Opportunities in Project Management

Speaker: Nicholas Hall, The Ohio State University, Fisher College of Business

Abstract: One-fifth of the world's economic activity, with an annual value of $12 trillion, is organized using the business process of project management. This process has exhibited dramatic growth in business interest in recent years, with a greater than 1000% increase in Project Management Institute membership since 1996. Contributing to this growth are many new applications of project management. These include IT implementations, research and development, software development, corporate change management, and new product and service development. However, the very different characteristics of these modern projects present new challenges. The partial resolution of these challenges within project management practice over the last 20 years defines numerous interesting opportunities for academic researchers. These research opportunities make use of a remarkably broad range of methodologies, including robust optimization, cooperative and noncooperative game theory, nonlinear optimization, predictive analytics, empirical studies, and behavioral modeling. Furthermore, the $4.5 trillion that is annually at risk from a shortage of skilled project managers, and the 15.7 million new jobs in project management expected by 2020, provide great opportunities for contributions to project management education. These educational opportunities include the integration of case studies, analytics challenges, online simulations, in-class games, self-assessment exercises, videos, and guest speaker presentations, which together form an appealing course for both business and engineering schools. This work will be published and presented as a Tutorial at the INFORMS Annual Meeting in Nashville, November 2016.

Friday, October 28th, 11:00 am – 12:00 pm

Stochastic Optimization Using Hellinger Distance

Speaker: Jie Xu, George Mason University

Abstract: Stochastic optimization facilitates decision making in uncertain environments. In typical problems, probability distribution models are fit to historical data for the chance variables, and optimization is then carried out as if the estimated probability distributions were the “truth”. However, this perspective is optimistic in nature and can frequently lead to sub-optimal or infeasible results, because the distribution can be mis-specified and the historical data set may be contaminated. In this paper, we propose to integrate existing approaches to decision making under uncertainty with robust and efficient estimation procedures based on the Hellinger distance. Within existing decision-making methodologies that make use of parametric models, our approach offers robustness against model mis-specification and data contamination. Additionally, it facilitates quantification of the impact of uncertainty in historical data on optimization results.
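For context (a generic sketch, not the speaker's implementation), the Hellinger distance between two discrete distributions p and q is H(p, q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2). It is bounded in [0, 1], which is part of what makes it attractive for robust estimation. The distributions below are hypothetical:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q.
    Equals 0 iff p == q and 1 iff their supports are disjoint."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Hypothetical fitted model vs. (possibly contaminated) empirical frequencies
model     = [0.25, 0.25, 0.25, 0.25]
empirical = [0.70, 0.10, 0.10, 0.10]
print(hellinger(model, model))                 # 0.0
print(round(hellinger(model, empirical), 3))
```

A minimum-Hellinger-distance estimator would choose the parametric model minimizing this distance to the empirical distribution, which downweights contaminated cells relative to likelihood-based fitting.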

Friday, October 21st, 11:00 am – 12:00 pm


Speaker: Anant Mishra, George Mason University, School of Business

Abstract: With supply chains now extending into developing countries, working conditions in supplier factories have repeatedly been found to be unsafe. In this study, we focus on factories in the Bangladesh ready-made garment (RMG) industry that supply to North American and European retailers. These retailers have adopted an innovative approach to improving the working conditions of supplier factories by forming consortiums. The consortium of North American retailers is the Alliance for Bangladesh Worker Safety (Alliance); the consortium of European retailers is the Accord on Fire and Building Safety in Bangladesh (Accord). The central question addressed in this study is: How do working conditions in a supplier factory affect its trustworthiness as seen from a buyer’s perspective? We characterize supplier factory working conditions in terms of three types of risk: structural risk, fire risk, and electrical risk. Next, we examine the implications of each type of risk for supplier trustworthiness, measured as the number of buyers contracting with the supplier factory. The empirical analysis is conducted using detailed archival data on safety inspection reports from the Alliance and the Accord. The results support the contention that buyers are sensitive to working-condition risks in a supplier factory: as these risks increase, trustworthiness decreases. However, this relationship varies with the type of risk. Specifically, fire and electrical risks are associated with decreased supplier trustworthiness, while structural risk has only a marginal effect. Further, the negative relationship between working-condition risks and supplier trustworthiness is moderated by the size of the supplier factory: for a given level of risk, buyers perceive larger supplier factories to be more trustworthy than smaller ones, expecting them to take corrective actions toward improving working conditions. These findings highlight the marketplace implications of working-condition risks in supplier factories and suggest that the competitiveness of a supplier factory in a developing country is inversely related to the level of working-condition risks in the factory.

Friday, October 14th, 11:00 am – 12:00 pm

Recent Advances in Chance-constrained Stochastic Programs

Speaker: Yongjia Song, Virginia Commonwealth University, Department of Statistical Sciences and Operations Research

Abstract: In this talk, we first briefly review the background of chance-constrained stochastic programming (CCSP) as well as the state-of-the-art solution methodology for CCSPs. We will then focus on a recently proposed solution method based on the Lagrangian duals for CCSPs, where the nonanticipativity constraints are relaxed. We compare the strength of the proposed dual bounds and demonstrate that they are superior to the bound obtained from the continuous relaxation of a standard mixed-integer programming (MIP) formulation. We also derive two new MIP formulations for CCSPs, and demonstrate that for chance-constrained linear programs, the continuous relaxations of these formulations yield bounds equal to the proposed dual bounds. Promising computational results indicate the superiority of the proposed methods.

Friday, September 23rd, 11:15 am – 12:15 pm

A Dynamic Clustering Approach to Data-Driven Assortment Personalization

Speaker: Fernando Bernstein, Duke University, Fuqua School of Business

Abstract: We consider a retailer facing heterogeneous customers with initially unknown product preferences. Customers are characterized by a diverse set of demographic and transactional attributes. The retailer can personalize the assortment offerings based on customers’ available profile information to maximize cumulative revenue. To that end, the retailer must estimate customer preferences by observing transaction data. This, however, may require a considerable amount of information given the broad range of customer profiles and the large number of products available. At the same time, the retailer can aggregate (pool) purchasing information among customers with similar product preferences. For a simplified version of the problem, we analytically characterize settings in which pooling transaction information is beneficial for the retailer. We also show that there are economies of scale in learning, in the sense that there are diminishing marginal returns to pooling information from an increasing number of customers. We next propose a dynamic clustering policy that adaptively adjusts customer segments (clusters of customers with similar preferences) and estimates customer preferences as more transaction information becomes available. We conduct an extensive numerical study to examine the benefit of pooling transaction data and personalizing assortment offerings by adopting the dynamic clustering policy. The study suggests that the benefits of dynamic clustering, relative to an “oblivious” policy that ignores profile information and treats all customers the same, or to a “data-intensive” policy that treats customers independently, can be substantial.

Friday, September 16th, 11:00 am – 12:00 pm

Spring 2016

More information coming soon!

Fall 2015

A Model to Estimate Individual Preferences Using Panel Data

Speaker: Gustavo Vulcano, Stern School of Business, New York University


In a retail operation, customer choices may be affected by stockout and promotion events. Given panel data with the transaction history of each customer, our goal is to predict future purchases. We use a general nonparametric framework in which we represent customers by partial orders of preferences. In each store visit, each customer samples a full preference list of the products, consistent with her partial order, forms a consideration set, and then chooses to purchase the most preferred product among the considered ones. Our approach involves: (a) behavioral models for building consideration sets, (b) a clustering algorithm for determining customer segments in the market, and (c) the derivation of marginal distributions for general partial preferences under the multinomial logit (MNL) and Mallows models. Numerical experiments on real-world panel data show that our approach allows more accurate, fine-grained predictions of individual purchase behavior compared to state-of-the-art existing methods.

Joint work with Srikanth Jagabathula, NYU

Friday, December 11th, 11:00 am – 12:00 pm

The Impact of Consumer Search Cost on Assortment Planning and Pricing

Speaker: Ruxian Wang, Carey Business School, Johns Hopkins University


Consumers search for product information to resolve valuation uncertainties before purchase. Under the consider-then-choose policy: in the first stage, a consumer forms her consideration set by balancing utility uncertainty and search cost; in the second stage, she evaluates all products in her consideration set and chooses the one with the highest net utility. The choice behavior within consideration sets is governed by the multinomial logit model. The revenue-ordered assortment fails to be optimal, although it always obtains at least half of the optimal revenue. We propose the k-quasi attractiveness-ordered assortment and show that it is arbitrarily near-optimal for a special case. The assortment problems are generally NP-hard, so we develop efficient exact and approximation algorithms for markets with homogeneous and heterogeneous consumers. For the joint assortment planning and pricing problem, we show that the intrinsic-utility-ordered assortment and the quasi-same-price policy, which charges the same price for all products except at most one, are optimal.
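For context, under the standard multinomial logit model without search costs, revenue-ordered assortments are known to be optimal; the consider-then-choose setting of this talk is where they can lose up to half the optimal revenue. A minimal sketch with hypothetical prices and preference weights, comparing the best revenue-ordered assortment against brute-force enumeration:

```python
from itertools import combinations

def mnl_revenue(prices, weights, assortment):
    """Expected revenue of an assortment under MNL; weights[i] = exp(utility_i),
    and the no-purchase option carries weight 1."""
    denom = 1.0 + sum(weights[i] for i in assortment)
    return sum(prices[i] * weights[i] for i in assortment) / denom

prices  = [10.0, 8.0, 6.0, 4.0]   # hypothetical, indexed in decreasing price
weights = [0.5, 1.0, 1.5, 2.0]

# Revenue-ordered candidates: nest the k highest-priced products
rev_ordered = max(mnl_revenue(prices, weights, range(k + 1)) for k in range(4))

# Brute-force optimum over all non-empty assortments
brute = max(mnl_revenue(prices, weights, s)
            for r in range(1, 5) for s in combinations(range(4), r))
print(rev_ordered, brute)   # equal under plain MNL
```

With consumer search costs added, the consideration-set stage breaks this equivalence, which motivates the near-optimal ordered assortments proposed in the talk.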

Friday, November 13th, 11:00 am – 12:00 pm

Carbon Market Dynamics and Supply Chain Management: Observations and Implications

Speaker: Ulku Gurler and Emre Berk, Bilkent University, Turkey


In this talk, we share our findings from a stream of funded research on supply chain management (SCM) decisions in the presence of carbon markets. We begin with stochastic and time series (fractional Brownian motion and ARMA-GARCH) modeling of carbon market prices based on the EU-ETS carbon emission allowances for the 2008-2012 time period. We discuss the impact of such market behavior on operational decisions in basic supply chain management models. In particular, we revisit the newsvendor setting where the final product uses supplementary and/or complementary inputs generating a carbon footprint for the product and there are carbon emission caps. We study the fundamental stocking and input allocation decisions for a risk-neutral decision maker. Using this model as the building block, we then discuss (i) advance purchase contracts, and (ii) risk-averse decision makers. We illustrate our findings through a real-life example in agricultural production.

Friday, October 30th, 11:00 am – 12:00 pm

On a Generalized Signature of Repairable Coherent Systems

Speaker: Fabrizio Ruggeri, CNR-IMATI, Milano, Italy


We introduce the notion of signatures for repairable systems made of different components, each of which can fail and be minimally repaired up to a fixed number of times. Failures occur according to Poisson processes, which may have either the same intensity function for each component or different ones. The former case recalls the notion of signature presented by Samaniego for i.i.d. random variables, whereas here independent Poisson processes with identical intensity functions are considered. We also provide the reliability function for different systems, from series systems to general coherent ones.

Joint work with Majid Chahkandi, Fabrizio Ruggeri and Alfonso Suarez-Llorens

Tuesday, October 27th, 11:00 am – 12:00 pm

Natural Hazards Modeling: From Storm Impacts to the Evolution of Regional Vulnerability Over Time

Speaker: Seth Guikema, University of Michigan, Department of Industrial and Operations Engineering


Hurricanes regularly impact communities and infrastructure systems along the U.S. coast, leading to substantial damage. Electric power systems are particularly heavily impacted in many storms. Many of the most vital systems and organizations in the U.S. are highly dependent on the functioning of the power system. A critical component of adequately preparing for and responding to these storms is having estimates of the magnitude and spatial distribution of the impacts prior to the event so that electric utilities, other power-dependent utilities, and government agencies can plan appropriately for their emergency response efforts for a given storm. In the longer term, climate change has the potential to substantially alter the hurricane environment and thus the risk to coastal communities and power systems. This poses substantial challenges in trying to achieve resilient and sustainable infrastructure and communities. This talk presents work done over the past 8 years to develop accurate power outage prediction models for hurricanes. The talk also summarizes recently published results that estimate the potential changes in hurricane risk to power systems under different future climate scenarios. This provides a basis for estimating which areas are most sensitive to changes in the hurricane environment to support planning for resilience and sustainability. The talk closes with an overview of an ongoing interdisciplinary effort led by Dr. Guikema to examine how the resilience and sustainability of a region evolves over time as an area experiences repeated exposure to hurricanes. This project uses an agent-based model to focus on the interplay between the hazard environment, individual behavior, and policy drivers in influencing the evolution of a community.

Friday, October 23rd, 11:00 am – 12:15 pm

An Empirical Analysis of the Sunk Cost Fallacy in Penny Auctions

Speaker: Chris Parker, Pennsylvania State University, Department of Supply Chain and Information Systems


The sunk cost fallacy is a widely known decision-making bias. We explore how the sunk cost fallacy impacts bidding behavior in penny auctions, where individuals pay a non-recoverable bidding fee to increment the current price by a penny. To do this, we utilize a large, proprietary dataset encompassing 1.2 billion bids from approximately 14 million people across 10.3 million auctions for nearly 150,000 products over a five-year period. The detail of our dataset allows us to go further than previous research and distinguish between financial and psychological sunk costs. We find that the sunk cost fallacy is primarily driven by the financial side of previous bids, but the psychological cost is also important. We also explore how intrinsic ability and learning mitigate the two sunk cost fallacy mechanisms. Finally, we explore bidder and product heterogeneity in the extent to which learning occurs.

Friday, October 16th, 11:00 am – 12:00 pm

On Information Quality (InfoQ)

Speaker: Ron S. Kenett, KPA Ltd., Israel and University of Turin, Italy


In a 2014 paper in the Journal of the Royal Statistical Society (Series A), Kenett and Shmueli define the concept of Information Quality (InfoQ) as the potential of a dataset to achieve a specific (scientific or practical) goal using a given empirical analysis method. InfoQ is different from data quality and analysis quality, but is dependent on these components and on the relationship between them. Eight dimensions are used to assess InfoQ: 1) Data Resolution, 2) Data Structure, 3) Data Integration, 4) Temporal Relevance, 5) Generalizability, 6) Chronology of Data and Goal, 7) Operationalization and 8) Communication. The talk will discuss the application of InfoQ in various domains such as customer survey analysis, risk management, healthcare management, official statistics and reproducible research.

Thursday, October 1st, 11:00 am – 12:00 pm

Quantifying Uncertainties Using Expert Assessments in a Dynamic New Product Development Environment

Speaker: Saurabh Bansal, Pennsylvania State University, Department of Supply Chain and Information Systems


Based on a unique new product development problem at The Dow Chemical Company, a Fortune 100 firm, this paper develops an optimization-based approach to estimate the means and standard deviations of yield distributions for producing new products when an expert provides judgmental estimates for quantiles of the distributions. The approach estimates both the mean and the standard deviation as weighted linear combinations of quantile judgments, where the weights are explicit functions of the expert’s judgmental errors. It is analytically tractable, and provides flexibility to elicit any set of quantiles from an expert. The approach also establishes that using an expert’s quantile judgments to deduce the distribution parameters is equivalent to collecting data with a specific sample size, and it enables combining the expert’s judgments with those of other experts. The theory has been in use at Dow for two years for making an annual decision worth $800 million. Results show that the judgments of the expert at Dow are equivalent to 5–6 years of data collection, and the use of our approach provides significant monetary savings annually as well as non-tangible benefits. We also discuss practical insights for seeking expert judgment for operational uncertainties in industrial situations.
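While the Dow approach derives its weights from the expert's judgmental errors, the general idea of expressing moments as weighted combinations of quantiles can be illustrated with the classical Pearson-Tukey-style three-point approximation (a generic sketch, not the paper's method; the "yield" distribution below is hypothetical):

```python
# Classical three-point approximations: moments as weighted quantile combinations
def mean_from_quantiles(q05, q50, q95):
    """Pearson-Tukey-style mean estimate from the 5th, 50th and 95th percentiles."""
    return 0.63 * q50 + 0.185 * (q05 + q95)

def sd_from_quantiles(q05, q95):
    """Standard deviation estimate; exact for the normal, since q95 - q05 = 3.29 sigma."""
    return (q95 - q05) / 3.29

# Sanity check on a hypothetical normal(100, 15) yield distribution
mu, sigma = 100.0, 15.0
q05, q50, q95 = mu - 1.645 * sigma, mu, mu + 1.645 * sigma
print(mean_from_quantiles(q05, q50, q95), sd_from_quantiles(q05, q95))
```

The paper's contribution is, in effect, to replace these fixed textbook weights with weights tuned to the individual expert's error structure.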

Friday, September 18th, 11:00 am – 12:00 pm

Spring 2015

Risk, Importance and Value of Information

Speaker: Emanuele Borgonovo, Bocconi University, Department of Decision Sciences

Decisions involving significant budgetary and public consequences expose decision makers to complex cognitive tasks. The decision making process can be simplified by the specification of an acceptable level of risk. Once a decision has been made, the knowledge of events whose occurrence significantly impacts the baseline level of risk is gathered through risk importance measures. However, risk importance measures fail to convey information before a decision is made. We introduce a value of information approach that bridges this gap, leading to a new importance measure. Our results establish an explicit link between risk metrics, risk importance measures and acceptable risk targets. We obtain analytically the expression of value of information as a function of the acceptable risk and of the probability of evidence, specifying the regions where it is increasing and decreasing. This result adds to previous literature on the dependence between value of information and its determinants. The new importance measure presents several advantages: it imposes no additional computational burden, it can be computed without specifying the decision maker’s utility function, and it makes importance measures usable, for the first time, in a pre-decision setting, augmenting the palette of tools available for a relevant class of complex decision analysis problems. A realistic application illustrates the managerial insights.
Joint work with Alessandra Cillo, Bocconi University, Department of Decision Sciences and IGIER
Thursday, July 16th, 11:30 am – 12:30 pm

Control of a Fleet of Vehicles to Collect Uncertain Information in a Threat Environment

Speaker: Rajan Batta, State University of New York at Buffalo, Department of Industrial & Systems Engineering


We study the problem of controlling a fleet of vehicles to search and collect information reward within a specified mission time from a set of regions characterized by uncertain reward and a threat environment. We seek a decentralized time-allocation policy using pre-calculated routes to maximize the total reward. We demonstrate that sharing regions among vehicles is beneficial. However, shared regions make the decentralized time-allocation problem computationally intractable. To overcome this, we develop an approximate formulation using an independence assumption. This approximate model allows us to decompose the time-allocation problem by vehicle and obtain an easily implementable policy that takes a Markovian form. We derive a tight upper bound for the decentralized time-allocation policy using the obtained Markovian policy. We also develop a sufficient condition under which the approximate formulation becomes exact. A numerical study establishes the computational efficiency of the method, where only a few CPU seconds are needed for problems with a planning horizon of 300 time units and 40 regions, and demonstrates the benefit of using a region-sharing strategy. The numerical study also examines the fleet’s workload-sharing behavior with respect to the cooperation factor (which measures the fused information reward gained from sharing), the mission duration and the search sequence.
Wednesday, May 6th, 5:00 pm – 6:00 pm

The Illusion of a Portfolio “Rebalancing Bonus”

Speaker: Michael Edesess, City University of Hong Kong, SEEM Department


The vast majority of financial advisors who advise individual and institutional clients on their investments recommend that the clients periodically “rebalance” their portfolios to restore their asset allocation after market movements cause it to drift. But careful investigation finds no evidence that the practice either enhances expected return or helps to control or reduce risk. Beneath the flawed reasoning that long advocated a rebalancing “bonus” lie three interesting mathematical inequalities. Computer simulations show, to a high level of certainty, that these inequalities are true, but they appear not yet to have been proven mathematically.
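The flavor of such simulations can be sketched as follows (illustrative return parameters, not the speaker's setup): compare the terminal wealth of an annually rebalanced 50/50 portfolio against buy-and-hold over many simulated paths.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_years = 10000, 30

# Hypothetical i.i.d. annual gross returns for a risky and a stable asset
risky  = rng.lognormal(mean=0.06, sigma=0.18, size=(n_paths, n_years))
stable = rng.lognormal(mean=0.03, sigma=0.05, size=(n_paths, n_years))

# Buy-and-hold: split wealth 50/50 once and let the weights drift
buy_hold = 0.5 * risky.prod(axis=1) + 0.5 * stable.prod(axis=1)

# Annual rebalancing: restore the 50/50 split every year
rebalanced = np.ones(n_paths)
for t in range(n_years):
    rebalanced *= 0.5 * risky[:, t] + 0.5 * stable[:, t]

print(buy_hold.mean(), rebalanced.mean())
```

Note that with i.i.d. returns, Jensen's inequality gives 0.5*E[A]^n + 0.5*E[B]^n >= (0.5*E[A] + 0.5*E[B])^n, so buy-and-hold cannot have lower expected terminal wealth than rebalancing in this setup, which is consistent with the talk's skepticism about a rebalancing "bonus" in expected return.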
Tuesday, April 28th, 11:00 am – 12:00 pm

Modeling Durations using Estimating Functions

Speaker: Nalini Ravishanker, Department of Statistics, University of Connecticut, Storrs


Accurate modeling of patterns in inter-event durations is important in several applications because patterns in elapsed times between events contain valuable information. Since the Autoregressive Conditional Duration (ACD) model was first proposed, several classes of duration models have been studied in the literature. Developing fast and accurate estimation methods for long duration series remains an ongoing research problem. The framework of martingale estimating functions (Godambe, 1985) provides an optimal approach for developing inference for linear and nonlinear time series based on information about the first two conditional moments of the observed process. In situations where information about higher-order conditional moments of the process is also available, combined (linear and quadratic) estimating functions are more informative. This talk describes the approach in the context of nonlinear duration modeling. Recursive equations, based on the nonlinear estimating functions, that permit fast online estimation of parameters from large data sets are derived. Since the accuracy of the solutions to the recursive formulas benefits immensely from good starting values of the parameters, an approach for determining such starting values is proposed and demonstrated in the context of the Log ACD models that are popular for duration modeling. A simulation study and an example of inter-event durations for IBM stock prices illustrate the approach. Extensions to other classes of nonlinear time series models are also discussed.
This is joint work with A. Thavaneswaran, University of Manitoba.
Friday, April 24th, 11:00 am – 12:00 pm

Consumer Behavior, Revenue Management, and the Design of Loyalty Programs

Speaker: So Yeon Chun, Georgetown University, McDonough School of Business


While originally viewed as marketing efforts, consumer reward loyalty programs have grown substantially in size and scope during the last two decades, to the extent that they now significantly interact with other firm functions, including operations, accounting and finance.
In the first part of the talk, we consider a question regarding the design of loyalty programs, which has been at the forefront of recent changes in the airline industry: should frequent-flyer status be awarded based on the money spent or miles flown? We present a model for strategic consumers’ decision and endogenously derive the demand as a function of prices, loyalty program design, and premium status qualification requirements. We then discuss firm’s optimal pricing and design decisions, and provide managerial implications. [Based on joint work with Anton Ovchinnikov (Queen’s)]

In the second part of the talk, we consider the long-term dynamic management of loyalty programs, and study the problem of optimally setting cash prices and point requirements (point prices). We develop a model that captures the complex effects of loyalty programs on several firm functions, such as sales revenues, reward redemptions, servicing costs, and earnings. We then discuss the structure of the optimal policy, and provide managerial insights and prescriptive recommendations.
Based on joint work with Dan Iancu (Stanford) and Nikolaos Trichakis (HBS).
Friday, April 17th, 11:00 am – 12:15 pm

Project Management Decisions with Uncertain Targets

Speaker: Jeffrey M. Keisler, College of Management, University of Massachusetts Boston


Sophisticated quantitative techniques for project management, notably PERT/CPM, attempt to minimize the risk that the project will fail to meet fixed requirements. But requirements themselves often vary, necessitating qualitative techniques to get or keep projects on track. This new work replaces the assumption of fixed requirements with new assumptions that allow for (1) a fully decision analytic treatment of project management decision making under uncertainty that (2) can be easily incorporated into existing project management techniques.
Friday, April 10th, 11:00 am – 12:00 pm

A Flexible Observed Factor Model with Separate Dynamics for the Factor Volatilities and Their Correlation Matrix

Speaker: Sujit K. Ghosh, NC State University & SAMSI


In this article, we consider a novel regression model with observed factors. To allow for the prediction of future observations, we model the observed factors using a flexible multivariate stochastic volatility (MSV) structure with separate dynamics for the volatilities and the correlation matrix. The correlation matrix of the factors is time varying, and its evolution is described by an inverse Wishart process. We develop an estimation procedure based on Bayesian Markov chain Monte Carlo methods, which has two major advantages compared to existing methods for similar models in the literature. First, the procedure is computationally more efficient. Second, it can be applied to calculate the predictive distributions for future observations. We compare the proposed model with other multivariate volatility models using Fama-French factors and portfolio-weighted return data. The results show that our model has better predictive performance.
Friday, April 3rd, 11:10 am – 12:00 pm

Advice Overextension: How and when do people provide advice?

Speaker: Robin Dillon-Merrill, Georgetown University, McDonough School of Business


Many decision models rely on judgments from subject-matter experts (SMEs). The Department of Homeland Security has a model that requires over a thousand probability assessments from experts on topics ranging from weapon types to border and transportation issues. In 2010, 7 SMEs from the Intelligence Community provided all of the required assessments to complete the model. While there is little doubt these 7 SMEs had knowledge depth in their fields of expertise, it is almost certain that some “extension” beyond their expertise base occurred. Much research has studied advice taking from experts, but relatively little is known about why people provide advice when they are not an expert on the subject. I will discuss several different behavioral laboratory studies that we have recently conducted that demonstrate that people are too willing to provide advice, often extending beyond their expertise. I will also discuss the use of the helping power motivation scale (Frieze and Boneva, 2001) as a possible explanatory mechanism.
Friday, February 20th, 11:00 am – 12:00 pm

The Potential of Servicizing as a Green Business Model

Speaker: Vishal Agrawal, Georgetown University, McDonough School of Business


It has been argued that servicizing business models, under which a firm sells the functionality of a product rather than the product itself, are environmentally beneficial. The main arguments are: First, under servicizing the firm charges customers based on product usage. Second, the quantity of products required to meet customer needs may be smaller because the firm may be able to pool customer needs. Third, the firm may also have an incentive to offer products with higher efficiency. In this paper, we investigate the economic and environmental potential of servicizing business models. We endogenize the firm’s choice between a pure sales model, a pure servicizing model, or a hybrid model with both sales and servicizing options, the pricing decisions, and the resulting customer usage. We consider two extremes of pooling efficacy, viz., weak versus strong pooling. We find that under weak pooling servicizing leads to higher production impact but lower use impact. In contrast, under strong pooling, when a hybrid business model is more profitable, it is also environmentally superior. However, a pure servicizing model may be environmentally inferior because it may lead not only to higher use impact but also to a larger quantity of products, even under strong pooling. We also examine the firm’s efficiency choice and find that, contrary to conventional wisdom, under servicizing the firm does not always offer higher-efficiency products. Furthermore, we show that while under sales a more efficient product leads to higher customer usage, under servicizing it may actually lead to lower usage.
Friday, February 6th, 11:00 am – 12:00 pm

Don’t Count on Poisson! Introducing the Conway-Maxwell-Poisson Distribution to Model Count Data

Speaker: Kimberly Sellers, Georgetown University, Department of Mathematics


Count data have become widely pervasive in various applied fields requiring data collection, including surveys, environmental studies, disease surveillance, and genetic studies. Classical statistical methods for count data center on the Poisson distribution and associated methodologies, whose assumption is that the mean and variance are equal. Real data, however, often violate this basic principle by displaying some form of dispersion. The Conway-Maxwell-Poisson (COM-Poisson) distribution is a flexible alternative for count data that not only contains three classical distributions as special cases, but can more broadly accommodate either over- or under-dispersion. As a result, it has served as a motivating distribution for generalizing many classical statistical methods to allow for dispersion, including regression analysis, control chart theory, and stochastic processes. This talk will highlight some of these areas, and demonstrate their use in various applications.
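The COM-Poisson pmf is P(X = x) proportional to lam**x / (x!)**nu, where the dispersion parameter nu recovers the Poisson (nu = 1), geometric (nu = 0, lam < 1), and Bernoulli (nu -> infinity) distributions as special cases. A minimal sketch of the pmf, with the infinite normalizing series truncated:

```python
import math

def com_poisson_pmf(x, lam, nu, terms=200):
    """P(X = x) = lam**x / (x!)**nu / Z(lam, nu).

    Z is an infinite series, truncated at `terms`; successive terms are
    built incrementally to avoid overflow in lam**j and (j!)**nu."""
    z, term, numer = 0.0, 1.0, 0.0        # term for j = 0 equals 1
    for j in range(terms):
        if j == x:
            numer = term
        z += term
        term *= lam / (j + 1) ** nu       # advance to lam**(j+1) / ((j+1)!)**nu
    return numer / z

# Special case nu = 1 recovers the ordinary Poisson distribution
lam = 3.0
assert abs(com_poisson_pmf(4, lam, 1.0)
           - math.exp(-lam) * lam**4 / math.factorial(4)) < 1e-12
# nu > 1 pulls variance below the mean (under-dispersion); nu < 1 the reverse
```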
Friday, January 30th, 11:15 am – 12:15 pm

Fall 2014

Statistical Models for Improving Prognosis of Heart Failure: Hazard Reconstruction, Clustering and Prediction of Disease Progression

Speaker: Francesca Ieva, University of Milano, Department of Mathematics


Heart Failure (HF) is nowadays among the leading causes of repeated hospitalisation in patients over 65. The longitudinal dataset resulting from the discharge papers, and its analysis, are consequently of great interest to clinicians and statisticians worldwide seeking insight into the burden of such an extensive disease. We analysed HF data collected from the administrative databank of an Italian regional district (Lombardia), concentrating our study on the days elapsed from one admission to the next for each patient in our dataset. The aim of this project is to identify groups of patients, conjecturing that the variables in our study, the time segments between two consecutive hospitalisations, follow a different Weibull distribution within each hidden cluster. The comprehensive distribution for each variable is therefore modeled by a Weibull mixture. From this assumption we developed a survival analysis in order to estimate, through a proportional hazards model, the corresponding hazard function for the proposed model and to obtain jointly the desired clusters. We find that the selected dataset, a good representative of the complete population, can be categorized into three clusters, corresponding to “healthy”, “sick” and “terminally ill” patients. Furthermore, we attempt a reconstruction of the patient-specific hazard function, adding a frailty parameter to the considered model.
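The clustering logic can be sketched as a finite Weibull mixture: given mixture weights and cluster-specific Weibull parameters (all hypothetical below, not the paper's fitted values), the posterior responsibility of each cluster for an observed gap between admissions follows from Bayes' rule:

```python
import numpy as np

def weibull_pdf(t, shape, scale):
    """Density of the Weibull(shape, scale) distribution at t > 0."""
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

# Hypothetical three-cluster mixture over days between consecutive admissions
weights = np.array([0.5, 0.3, 0.2])      # "healthy", "sick", "terminally ill"
shapes  = np.array([1.2, 1.0, 0.8])
scales  = np.array([400.0, 150.0, 40.0])

def responsibilities(t):
    """Posterior probability that a gap of t days arose from each cluster."""
    comp = weights * weibull_pdf(t, shapes, scales)
    return comp / comp.sum()

print(responsibilities(20.0))   # a short gap points to the sickest cluster
```

In an EM-style fit, these responsibilities would drive the update of the weights and Weibull parameters; the paper additionally couples the clusters to a proportional hazards model.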
Friday, November 14th, 11:00 am – 12:00 noon
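
As a schematic illustration of the modelling idea (not the fitted model from the talk), the density of a finite Weibull mixture can be written down directly; the three components and their parameter values below are hypothetical stand-ins for the "healthy", "sick" and "terminally ill" clusters:

```python
import math

def weibull_mixture_pdf(t, components):
    """Density of a finite Weibull mixture.
    components: list of (weight, shape k, scale lam) triples; weights sum to 1."""
    return sum(w * (k / lam) * (t / lam) ** (k - 1) * math.exp(-(t / lam) ** k)
               for w, k, lam in components)

# Hypothetical three-cluster model for days between consecutive hospitalisations:
clusters = [(0.5, 1.2, 400.0),  # "healthy": long gaps between admissions
            (0.3, 1.0, 120.0),  # "sick"
            (0.2, 0.8, 30.0)]   # "terminally ill": short gaps
```

A mixture like this can be fitted by EM or, as in the talk, estimated jointly with a proportional hazards model to recover the hidden clusters.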

Software testing analytics: Examples from the evaluation of an independent software testing organization and the design of dynamic web testing

Speaker: Ron S. Kenett, KPA Ltd., Israel


Testing software is an activity required by customers and dictated by economic considerations. The talk will consist of two parts related to software testing. In the first part, the talk will present an assessment of the effectiveness of an independent testing organization in terms of Escaping Defects, which are defects that were missed by the testing team and were detected by users after the code was deployed. The talk will focus on the analysis of test data using the COM-Poisson regression model, which can handle under-dispersion and over-dispersion relative to the Poisson model. Such data are common in testing, and Poisson or Negative Binomial models are not flexible enough to fit them. In the second part, testing of dynamic web services is considered in the context of risk-based group testing. The approach is used for selecting and prioritizing test cases for testing service-based systems in the context of semantic web services. This work analyzes the two factors of risk estimation, failure probability and importance, from three aspects: ontology data, service, and composite service. With this approach, test cases are associated with semantic features and are scheduled based on the risks of their target features. Risk assessment using Bayesian networks is used to control the process of Web Services progressive group testing, including test case ranking, test case selection, and ruling out services.
Wednesday, November 5th, 11:00 am – 12:00 noon

Bayesian Analysis of Traffic Flow Data

Speaker: Vadim O. Sokolov, Transportation Systems Modeling Group, Argonne National Laboratory


In this talk we consider the problem of estimating the state of traffic flow using filtering techniques that rely on an analytical model of traffic flow. The goal is to estimate current traffic conditions as accurately as possible from the sparse and noisy measurements of in-ground induction loop detectors. In practice this information is provided to travelers, who make routing decisions, and to transportation system managers, who use it to forecast traffic conditions 15-30 minutes ahead in order to apply appropriate control strategies, such as route guidance or flow control through ramp metering. Existing filtering algorithms are limited in their ability to properly capture the nonlinear system dynamics, the mixture nature of the state uncertainty, and non-Gaussian sensor models. Here we develop and apply a computationally efficient particle-filter-based algorithm to address this problem. We apply our algorithm to a data set of measurements from the Illinois interstate highway system.
Friday, October 24th, 11:00 am – 12:00 noon
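
A bootstrap particle filter of the kind the talk builds on can be sketched for a toy one-dimensional model with random-walk dynamics and a Gaussian sensor; the real traffic model is of course nonlinear and non-Gaussian, and all names and parameter values here are illustrative assumptions:

```python
import math
import random

def particle_filter(obs, n=1000, q=1.0, r=1.0):
    """Bootstrap particle filter for the toy model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns the filtered mean of the state at each step."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]   # prior sample
    means = []
    for y in obs:
        # propagate each particle through the (here: random-walk) dynamics
        parts = [x + random.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the Gaussian sensor likelihood
        w = [math.exp(-(y - x) ** 2 / (2 * r)) for x in parts]
        s = sum(w)
        w = [wi / s for wi in w]
        # multinomial resampling
        parts = random.choices(parts, weights=w, k=n)
        means.append(sum(parts) / n)
    return means
```

Replacing the random-walk step with a discretized traffic-flow model and the Gaussian likelihood with a loop-detector sensor model gives the general shape of the algorithm the abstract describes.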

Hedging Demand and Supply Risks in Inventory Models

Speaker: Süleyman Özekici, Department of Industrial Engineering, Koç University, Istanbul, Turkey


We consider a single-period inventory model where there are risks associated with the uncertainty in demand as well as supply. Furthermore, the randomness in demand and supply is correlated with the financial markets. Recent literature provides ample evidence on this issue. The inventory manager may then exploit this correlation and manage his risks by investing in a portfolio of financial instruments. The decision problem therefore includes not only the determination of the optimal ordering policy, but also the selection of the optimal portfolio at the same time. We analyze this problem in detail and provide a risk-sensitive approach to inventory management where one considers both the mean and the variance of the resulting cash flow. The analysis results in some interesting and explicit characterizations on the structure of the optimal policy. The emphasis is on the impact of hedging and risk reduction.
Friday, October 17th, 11:00 am – 12:00 noon

Economic and Environmental Assessment of Remanufacturing Strategies for Product + Service Firms

Speaker: Gal Raz, Darden School of Business, University of Virginia


This article provides a data-driven assessment of the economic and environmental aspects of remanufacturing for product + service firms. A critical component of such an assessment is the issue of demand cannibalization. We therefore present an analytical model and a behavioral study which together incorporate demand cannibalization from multiple customer segments across the firm’s product line. We then perform a series of numerical simulations with realistic problem parameters obtained from both the literature and discussions with industry executives. Our findings show that remanufacturing frequently aligns firms’ economic and environmental goals by increasing profits and decreasing total environmental impact. We show that in some cases the introduction of a remanufactured product leads to no change in the new products’ prices (positioning within the product line), implying positive demand cannibalization and a decrease in environmental impact; this provides support for a heuristic approach commonly used in practice. Yet in other cases, the firm can increase profits by decreasing the new product’s prices and increasing sales, a negative effective cannibalization. With negative cannibalization the firm’s total environmental impact often increases due to the growth in new production. However, we illustrate that this growth is nearly always sustainable, as the relative environmental impacts per unit and per dollar rarely increase.
Friday, October 10th, 11:00 am – 12:00 noon

Spring 2014

Estimation and Mitigation of Downside Risks in Project Portfolio Selection

Speaker: Janne Kettunen, The George Washington University


When projects are selected based on uncertain ex ante estimates about how much value they will yield ex post, projects whose values have been overestimated are more likely to be selected. Thus, the estimated value of the project portfolio, expressed as the sum of value estimates for selected projects, tends to be higher than the realized portfolio value obtained as the sum of ex post values of these projects. It is known that the resulting overestimation of expected portfolio value can be eliminated by employing revised value estimates based on Bayesian updating (Vilkkumaa et al., 2014). In this work, we show that the uncertainties in estimating projects’ values, combined with the selection of a subset of projects, have major implications for the development of risk estimates about portfolio value. First, if downside risks are measured in terms of lower percentiles of the distribution of portfolio value, the estimates will have a systematic upward or downward bias depending on the correlations between project values and between estimation errors. Second, even if Bayesian updating of value estimates in many cases improves the accuracy of risk estimates, it will not yield unbiased estimates. Third, to improve the accuracy of risk estimates, we propose the use of calibration curves, which can be derived by analyzing past selection processes or by simulating the portfolio selection process. We also consider the introduction of risk constraints, but show that this approach may yield risk estimates that are too optimistic, in that the estimated portfolio values in lower percentiles are well above their actual levels.

Friday, May 9th, 3:00 PM – 4:00 PM
Duques Hall, Room 553 (801 22nd Street NW)

Visualizing Survey Operations

Speaker: Fred Highland, Lockheed Martin Information Systems and Global Solutions


Modern survey data collection systems must balance cost and quality while supporting multiple response modes (paper, internet, telephone and personal interview) and addressing unpredictable respondent behavior. The next generation of survey systems utilizes adaptive methods to address these issues adding additional dynamics to already complex systems and raising new challenges to operations management. The presentation will discuss the problem of visualizing this complex collection of information and how it can be used to manage future survey operations. It will provide an overview of modern survey systems and adaptive methods, a review of previous survey management approaches and operations concepts, and begin the discussion on visualizing the operation of the next generation of multi-modal adaptive survey systems.

Thursday, May 8th, 3:00 PM – 4:00 PM
Duques Hall, Room 353 (801 22nd Street NW)

Monthly Clinic Assignments for Residents

Speaker: Jonathan F. Bard, University of Texas at Austin


Upon receiving their degree, medical school graduates enter residencies or training programs in specific specialties. As part of this training, each intern and resident, collectively called housestaff, must spend one or two half-day sessions a week in their assigned continuity clinic. The exact amount of time is a function of their current monthly rotation. In fact, it is the variable clinic hour requirements that drive the scheduling process, and is what distinguishes this problem from most personnel scheduling problems. From the program director’s point of view, the objective is to both maximize clinic hours and minimize the number of violations of a prioritized set of goals while ensuring that certain clinic-level and individual constraints are satisfied. The corresponding problem is formulated as an integer goal program and a three-phase methodology is proposed to find solutions. After pre-processing, a commercial solver is used to obtain tentative solutions and then improvements are made in a post-processing step. The effectiveness of the methodology is demonstrated by analyzing eight monthly rosters provided by the Internal Medicine Residency Program at the University of Texas Health Science Center in San Antonio. On average, we were able to assign up to 7.62% more clinic sessions with far fewer violations of the goals than were seen in the actual schedules worked.

Monday, May 5th, 11:00 AM – 12:00 PM
Duques Hall, Room 553 (801 22nd Street NW)

Counter-terrorism Decisions Using Asymmetrically Prescriptive/Descriptive Game Theory

Speaker: Jason R. W. Merrick, Virginia Commonwealth University


Counter-terrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. In this talk, we discuss techniques from decision analysis and game theory and more recent hybrid approaches, intelligent adversary risk analysis and adversarial risk analysis. We continue by questioning the descriptive validity of the adversarial parts of the model. Classical game theory assumes Expected Utility Theory preferences. However, a growing body of work suggests that while EUT is the best prescriptive model of preferences, it is not the best descriptive model. Prospect theory is presently the leading descriptive theory of choice under uncertainty, but prior work in game theory has only considered special cases of prospect theory for risk. We propose an asymmetrical prescriptive/descriptive approach to game theory that uses a hybrid of expected utility and prospect theory preferences over risk. Our results are applicable to decision analysis situations where one is advising a client decision maker what they should do in a competitive, interactive situation, while modeling what the other decision makers will do. We study the effects of this approach in several sequential decisions, including whether to screen containers entering the US for radioactive materials, how an incumbent company should respond to a new entrant in their market, and price setting in a simple supply chain.

Friday, April 25th, 3:30 PM – 4:30 PM
Duques Hall, Room 353 (801 22nd Street NW)

An Empirical Analysis of Price, Quality, and Incumbency in Procurement Auctions

Speaker: Professor Tunay Tunca, Department of Decision, Operations, and Information Technology, Robert H. Smith School of Business, University of Maryland


The use of multi-attribute auctions for procurement of products and services when both price and quality matter is becoming more frequent. Such auctions often employ scoring rules and are open-ended in winner determination. Yet there is a significant gap in the literature on studying the efficiency of these procurement mechanisms. In this paper, providing a theoretical model and utilizing data from legal service procurement auctions, we study how open-ended scoring auctions can be used effectively in procurement, and demonstrate the roles supplier quality and incumbency play in this process. We demonstrate that open-ended auctions can generate substantial savings to a buyer without compromising quality. We study the underlying mechanism and show how the auction format can work to achieve such performance. We find that the buyer’s revealed preferences significantly differ from her stated preferences. Finally, we contribute to the understanding of the role of incumbency in procurement auctions by providing evidence that what may be perceived as incumbency bias can in fact be a revelation of preference for quality.

Friday, April 18th, 11:00 AM – 12:00 PM
Duques Hall, Room 353 (801 22nd Street NW)

Mathematical Programming Approaches for Multi-vehicle Path Coordination Under Communication Constraints

Speaker: Professor Hande Benson, Decision Sciences Department, Drexel University


We present a mathematical programming approach for generating time-optimal velocity profiles for a group of vehicles that must follow fixed and known paths while maintaining communication connectivity. Each vehicle is required to arrive at its goal as quickly as possible and stay in communication with a certain number of other vehicles in the arena throughout its journey. This problem arises frequently in emergency response, particularly search-and-rescue efforts, in routing fleets of driverless vehicles, and in urban security and warfare applications. We formulate the centralized problem as a discrete-time mixed-integer nonlinear programming problem (MINLP) with constraints on vehicle kinematics, dynamics, collision avoidance, and communication connectivity. We investigate the efficient solution of the MINLP and the scalability of the proposed approach by testing scenarios involving up to fifty (50) vehicles. Finally, we present results on the corresponding decentralized problem.

Friday, April 4th, 2:30 PM – 3:30 PM
Duques Hall, Room 453 (801 22nd Street NW)

Infrastructure Network Protection

Speaker: Melike Baykal-Gürsoy, Department of Industrial and Systems Engineering, Rutgers University


Network security against possible attacks involves making decisions under uncertainty. In this talk, we present game-theoretic models of allocating defense effort among the nodes of a network. We consider both static and dynamic games. We derive the unique equilibrium strategy pair in closed form for a simple static game. We consider the case in which the network’s defender does not know the adversary’s motivation for intruding on the network – e.g., to bring maximal damage to the network or to infiltrate it for other purposes. We illustrate and analyze the consequences of taking this uncertainty into account with a simple Bayesian game model. We show how information about this factor can be used to increase the efficiency of the optimal protection strategy. We also prove that the attack strategy has a node-sharing structure. The presentation will conclude with a discussion of future research.

Friday, Feb 28th, 11:00 AM – 12:00 PM
Location: Duques Hall, Room 453 (801 22nd Street NW)

Incorporating unobserved heterogeneity in Weibull survival models: A Bayesian approach

Speaker: Mark Steel, Department of Statistics, University of Warwick, UK


We propose flexible classes of distributions for survival modelling that naturally deal with both the presence of outlying observations and unobserved heterogeneity. We present the family of Rate Mixtures of Weibull distributions, for which a random effect is introduced through the rate parameter. This family contains, among others, the well-known Lomax distribution and can accommodate flexible hazard functions. Covariates are introduced through an Accelerated Failure Time model and we explicitly take censoring into account. We construct a weakly informative prior that combines the structure of the Jeffreys prior with a proper (informative) prior. This prior is shown to lead to a proper posterior distribution under mild conditions. Bayesian inference is implemented by means of a Metropolis-within-Gibbs algorithm. The mixing structure is exploited in order to provide an outlier detection method. Our methods are illustrated using two real datasets, one concerning bone marrow transplants and another on cerebral palsy.

Thursday, Feb 13th, 4:00 pm – 5:00 pm
Location: Phillips Hall, Room 109 (801 22nd Street NW)
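
The Lomax special case mentioned in the abstract can be checked by simulation: if an exponential lifetime's rate is itself Gamma(a, rate b) distributed, the marginal survival function is (1 + t/b)^(-a). A minimal sketch, with arbitrary parameter values of our choosing:

```python
import random

def rate_mixture_sample(a=2.0, b=1.0):
    """One draw from an exponential lifetime whose rate is Gamma(a, rate=b).
    Marginally this is Lomax, with survival S(t) = (1 + t/b)**(-a)."""
    lam = random.gammavariate(a, 1.0 / b)  # gammavariate takes a *scale* parameter
    return random.expovariate(lam)

random.seed(1)
draws = [rate_mixture_sample() for _ in range(100_000)]
emp = sum(t > 1.0 for t in draws) / len(draws)  # empirical P(T > 1)
exact = (1 + 1.0 / 1.0) ** (-2.0)               # Lomax survival at t = 1
```

The Weibull case in the talk generalizes this construction by replacing the exponential kernel with a Weibull one.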

Overcoming the Planning Fallacy

Speaker: Yael Grushka-Cockayne, Darden School of Business, University of Virginia


How an organization manages its projects is critical to its success. Yet firms routinely experience the Planning Fallacy: projects are delivered late, over budget, or with reduced scope. We investigate project performance and describe our work with the UK Department for Transport and Network Rail on how to plan for the planning fallacy. Using concepts from forecasting and the aggregation of expert opinions, we propose methods for overcoming the fallacy.

Friday, January 31st, 11:00 AM – 12:00 PM
Location: Duques 453 (2201 G Street, NW)

Storming Towards Scalable Bayesian Sequential Inference

Speaker: Simon Wilson, School of Computer Science and Statistics, Trinity College Dublin, Ireland


In this talk I describe our initial work on implementing sequential inference methods using Storm. Storm is an open-source, distributed, fault-tolerant framework for the processing of streaming data. It provides a simple Java interface which supports the creation of powerful streaming algorithms that are automatically parallelised and distributed across a computational cluster. I will briefly describe how Storm works and show how to implement some simple sequential inference algorithms. In conclusion, I discuss how suitable Storm is for implementing Bayesian sequential algorithms such as the particle filter.

Friday, January 10th, 11:00 AM – 12:00 PM
Location: Funger 620 (2201 G Street, NW)

Fall 2013

On Aggregating Probabilistic Information: The Wisdom of (and Problem with) Crowds

Research related to the wisdom of crowds has often shown that aggregating forecasts through linear opinion pools can provide a much better point estimate of unknown quantities than individual experts/forecasters. We examine how well this idea translates to probability forecasts.
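
A linear opinion pool is simply a weighted average of the individual probability forecasts; a minimal sketch (equal weights by default, which is our own illustrative choice):

```python
def linear_opinion_pool(forecasts, weights=None):
    """Aggregate probability forecasts by a weighted average (linear opinion pool)."""
    if weights is None:
        weights = [1.0 / len(forecasts)] * len(forecasts)
    return sum(w * p for w, p in zip(weights, forecasts))
```

The talk's question is how well this pooled probability behaves as a probability forecast, for example in terms of calibration, rather than as a point estimate.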

Simple Priors for Variance Components: Bayesian Applications of the Root-Uniform Distribution

Recent work has shown that the use of the previously common Gamma distribution as a non-informative prior for variance components in Bayesian hierarchical models can actually be informative and lead to overly small random effects. As a result, alternatives to the Gamma distribution such as the uniform distribution, the folded noncentral t distribution, and the half-Cauchy distribution are now more prevalent. We introduce the root-uniform distribution as a simple, flexible prior distribution for this context. The root-uniform generalizes the uniform distribution and has appealing properties.

Wine Futures and Advance Selling under Quality Uncertainty

This paper examines the use of wine futures and advance selling as a form of operational flexibility to mitigate quality rating risk in wine production. At the end of a harvest season, the winemaker obtains a certain number of barrels of wine that can be produced for a particular vintage. After one more year of aging, the wine is bottled, and the reviewers provide another review of the wine, and assign a bottle score that influences the market price of the wine. Advance selling in the form of wine futures offers several benefits to the winemaker. It enables the firm to pass on the risk of holding inventory that is uncertain in value to the consumers.

Spring 2013

The Role of Predictive Distributions in Process Optimization

Quality improvement has been described in a nutshell as “reduction in variation about a target”. Such reduction is driven by the desire to have a high probability of meeting process specifications. However, many statistical quantifications and decisions related to process optimization and response surface analyses are focused only on means, without careful thought to the role of variation and risk assessment. A focus on inference for means is also evident from a review of classical response surface methodology textbooks and popular statistical packages for process optimization.

Vast Search Effect

The American public is often confronted with sensationalized studies that show statistically significant results for one new phenomenon or another. For example, you might read a headline such as: “Mother’s Depression Linked to Child’s Shorter Height,” ABC News, Sept 10, 2012. The headline is flashy enough to grab attention. But a different picture emerges when one takes the time to learn how the study was designed and how the information was analyzed.

Quality of Target Benefits of Proposed Projects: Developing a Generic Construct

Organizational growth is accelerated by the successful implementation of projects; hence project selection and funding are critical organizational decisions. While the literature is comprehensive in its discussion of the financial analysis of proposed projects, it is weaker on the assessment of non-financial target benefits, those anticipated to be realized after project completion. Consequently, target benefits of proposed projects are often vaguely defined, inflated, and subject to optimism bias.

Should Event Organizers Prevent Resale of Tickets?

We are interested in whether preventing resale of tickets benefits the capacity providers for sporting and entertainment events. Common wisdom suggests that ticket resale is harmful to event organizers’ revenues and event organizers have tried to prevent resale of tickets. For instance, Ticketmaster has recently proposed paperless (non-transferrable) ticketing which would severely limit the opportunity to resell tickets.

Adversarial Risk Analysis: Games and Auctions

Adversarial risk analysis is a decision-analytic approach to strategic games. It builds a Bayesian model for the solution concept, goals, and resources of the opponent, and the analyst can then make the choice that maximizes expected utility against that model. Adversarial risk analysis operationalizes the perspective in Kadane and Larkey (1982), and it often enables the analyst to incorporate empirical data from behavioral game theory. The methodology is illustrated in the context of La Relance, a routing game, and auctions.

Cholesky Stochastic Volatility Models for High-Dimensional Time Series

Multivariate time-varying volatility has many important applications in finance, including asset allocation and risk management. Estimating multivariate volatility, however, is not straightforward because of two major difficulties.

Next Generation of Mathematical Programming Modeling and Solving Tools

During the last 40 years Mathematical Programming (MP) has increasingly found applications in industry in various areas such as finance, marketing, supply chain, energy, data mining and decision analytics. This talk reviews the current state of the art of MP in view of recent developments. Historically, modeling mathematical programming problems has relied on either of two methodologies. The speaker will address these and questions about how today’s companies can implement the next generation of mathematical programming and the benefits they can accrue from it.

Big Data Revolution: Analytics and Optimization

How do companies define Big Data? How are they using it? What strengths are they receiving from Big Data Analytics? We will review what Big Data means, and we will explore case studies from Banking, Insurance, Retail and Healthcare Verticals.

Stable Distributions: Models for Heavy-Tailed Data

Stable distributions are a class of heavy tailed probability distributions that generalize the Gaussian distribution and that can be used to model a variety of problems. An overview of univariate stable laws is given, with emphasis on the practical aspects of working with stable distributions.
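
One practical aspect of working with stable laws is simulation: symmetric alpha-stable draws can be generated with the Chambers-Mallows-Stuck method, sketched below (alpha = 2 recovers a normal with variance 2, alpha = 1 the standard Cauchy). This is a generic illustration, not material from the talk:

```python
import math
import random

def sym_stable(alpha):
    """Chambers-Mallows-Stuck draw from a symmetric alpha-stable law (beta = 0)."""
    u = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # standard Cauchy
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))
```

For alpha < 2 the draws have infinite variance, which is exactly the heavy-tailed behavior that makes these laws useful models.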

Introduction to R: modeling, computing, visualizing and fun

The use of the R environment for statistical computing and data analysis has exploded over the last decade and R has matured into a mainstream and must-know environment that every serious (and fun loving!) academic should leverage. Given its unrivalled advantages in terms of access, help, flexibility, extensibility and aesthetics, the presentation will showcase the salient features of R so as to provide you with a working knowledge of R.

Fall 2012

Parametric and topological inference for masked system lifetime data

Commonly, reliability data consists of lifetimes (or censoring information) on all components and systems under examination. However, masked system lifetime data represents an important class of problems where the information available for statistical analysis is more limited: one only has failure times for the system as a whole, but no data on the component lifetimes directly, or even which components were failed.

Efficient Distribution of Water Between Head-Reach and Tail-End Farms in Developing Countries

The necessity of surface water for irrigation and its increasing scarcity in developing economies motivate the need for its efficient distribution. The inequity in the distribution of surface water arises from the relative physical locations of the farms. Head-reach (primary) farms are close to the source while tail-end (secondary) farms are relatively farther. The lack of physical infrastructure implies that water allocated to secondary farms must pass through primary farms. Left to their individual incentives, primary farmers use more than their fair share of water by denying its release to secondary farmers. Such inequitable sharing results in significantly sub-optimal productivity of the farming community as a whole.

Extropy: XColony – A Paper Game. An Exotic Tool for Intuitive Decision Making

XColony is a hierarchical modular construction game that aims to train the brain for today’s careers: researcher, scientist, data analyst, or manager. Concepts like abstract thinking, structure, modular approach, duality and isomorphism are emerging from a toy, using a visual language rather than mathematical or formal notations. The game-like environment would make it easier to understand problems in computational architecture, robotics, 3D perception and combinatorial geometry.

Kolmogorov Stories

Asaf Hajiev is a Professor of Mathematics at Baku State University and a Corresponding Member of the Azerbaijan National Academy of Sciences. He received his doctorate in probability theory from Moscow State University under the supervision of Yuri Belyaev, with a specialization in queueing and reliability. His current interests are in probability modeling and statistical inference. He has been a visiting professor at many institutions, including UC Berkeley and Boğaziçi University in Istanbul. During his student days at Moscow State University he had first-hand interactions with Kolmogorov, as a teacher, a mentor, an advisor, and a friend. In this talk Asaf will relate his experiences with Kolmogorov, with a slant towards the personal and the non-academic, and give us some interesting stories about Kolmogorov’s modus operandi with his colleagues, students, and the bevy of scientists and mathematicians who visited him.

Extropy: A complementary dual of entropy

Hospitals clustering via semiparametric Bayesian models: Model based methods for assessing healthcare performance

A Bayesian semiparametric mixed effects model is presented for the analysis of binary survival data coming from a clinical survey on STEMI (ST segment Elevation Myocardial Infarction), where statistical units (i.e., patients) are grouped by hospital of admission. The idea is to exploit the flexibility and potential of such models for carrying out model-based clustering of the random effects in order to profile hospitals according to their effects on patient outcomes.

Spring 2012

Analysis of Multi-server Ticket Queues with Customer Abandonment

Speaker: Kaan Kuzu, Sheldon B Lubar School of Business, University of Wisconsin-Milwaukee


“Ticket Queues” are a new generation of queueing systems that issue tickets to customers upon their arrival. Ticket queues differ from physical queues in the amount of information available to customers when they arrive. This study analyzes the system performance of multi-server ticket queues with reneging and balking customers, who periodically observe their position in the queue and re-evaluate their decision on whether to abandon the system. We model the ticket queues using a Markov chain and develop two accurate and effective approximation heuristics. These tools enable us to analyze abandonment probabilities in real systems. Using our analytical model, we analyze the ticket-queue data set of a bank and propose a method for separating customers’ reneging and balking probabilities.

Friday, May 11, 11:30 AM – 12:30 PM

Location: Duques 652 (2201 G Street, NW)
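
The talk's Markov chain model is richer than standard abandonment models, but the flavor can be conveyed with the classical Erlang-A (M/M/c + M) birth-death chain, in which every waiting customer abandons at rate theta. This simplified stand-in is our own assumption, not the speaker's model:

```python
def erlang_a_abandonment(lam, mu, theta, c, K=500):
    """Stationary abandonment probability in an Erlang-A (M/M/c + M) queue,
    computed from the birth-death balance equations truncated at K states."""
    p = [1.0]  # unnormalized stationary probability of state 0
    for n in range(1, K):
        # death rate: busy servers plus abandonments from waiting customers
        death = mu * min(n, c) + theta * max(n - c, 0)
        p.append(p[-1] * lam / death)
    z = sum(p)
    p = [x / z for x in p]
    queue = sum((n - c) * p[n] for n in range(c + 1, K))  # E[number waiting]
    return theta * queue / lam  # fraction of arrivals that abandon
```

In a ticket queue, by contrast, customers act on delayed positional information rather than a continuously observed queue, which is what motivates the approximation heuristics in the talk.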

Subjective Probability: Its Axioms and Acrobatics

Speaker: Nozer D. Singpurwalla, Professor of Statistics and Decision Sciences, The George Washington University


The meaning of probability has been enigmatic, even to the likes of Kolmogorov, and continues to be so. It is fallacious to claim that the law of large numbers provides a definitive interpretation. Whereas the founding fathers, Cardano, Pascal, Fermat, Bernoulli, de Moivre, Bayes, and Laplace, took probability for granted, the latter-day writers, Venn, von Mises, Ramsey, Keynes, de Finetti, and Borel engaged in philosophical and rhetorical discussions about the meaning of probability. Entering the arena were also physicists like Cox, Jeffreys, and Jaynes, and philosophers like Carnap, Jeffrey, and Popper. Interpretation matters because the paradigm used to process information and act upon it is determined by perspective. The modern view is that the only philosophically and logically defensible interpretation of probability is that probability is not unique, that it is personal, and therefore subjective. But to make subjective probability mathematically viable, one needs axioms of consistent behavior. The Kolmogorov axioms are a consequence of these behavioristic axioms. In this expository talk, I will review these more fundamental axioms and point out some of the underlying acrobatics that have led to debates and discussions. Besides mathematicians, statisticians, and decision theorists, the material here should be of interest to physical, biological, and social scientists, risk analysts, and those engaged in the art of “intelligence” (Googling, code breaking, hacking, and eavesdropping).

Friday, April 27, 3:00 PM – 4:00 PM (followed by a wine and cheese reception)

Location: Duques 651 (2201 G Street, NW)

Information about Dependence in the Absence and Presence of a Probable Cause

Speaker: Ehsan S. Soofi, Sheldon B Lubar School of Business, University of Wisconsin-Milwaukee


In general, dependence is more complicated than what can be measured by traditional indices such as the correlation coefficient, its nonparametric counterparts, and the fraction of variance reduction. An information measure of dependence, known as the mutual information, is increasingly being used in traditional as well as more modern problems. The mutual information, denoted here as M, measures the departure of a joint distribution from the independence model. We also view M as an expected utility of variables for prediction. This view integrates ideas from the general dependence literature and the Bayesian perspective. We illustrate the success of this index as a “common metric” for comparing the strengths of dependence within and between families of distributions, in contrast with the failures of the popular traditional indices. For the location-scale family of distributions, an additive decomposition of M gives the normal distribution as the unique minimal dependence model in the family. An implication for practice is that the popular association indices underestimate the dependence of elliptical distributions, severely for models such as t-distributions with low degrees of freedom. A useful formula for M of the convolution of random variables provides a measure of dependence when the predictors and the error term are normally distributed jointly or individually, as well as under other distributional assumptions. Finally, we draw attention to a caveat: M is not applicable to continuous variables when their joint distribution is singular, due to a “probable cause” for the dependence. For an indirect application of M to singular models, we propose a modification of the mutual information index which retains the important properties of the original index, and show some potential applications.
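For the bivariate normal, the simplest member of the location-scale family above, M has the closed form M = -½ log(1 − ρ²); the sketch below checks that formula against a direct grid integration (the grid limits and resolution are arbitrary choices):

```python
import math

def mi_normal_closed(rho):
    """Mutual information M of a standard bivariate normal, in nats."""
    return -0.5 * math.log(1 - rho ** 2)

def mi_normal_numeric(rho, lim=6.0, n=240):
    """Midpoint-rule approximation of M = E[log f(x,y) - log fx(x)fy(y)]."""
    h = 2 * lim / n
    c = 1 - rho ** 2
    s = 0.0
    for i in range(n):
        x = -lim + (i + 0.5) * h
        for j in range(n):
            y = -lim + (j + 0.5) * h
            f = math.exp(-(x*x - 2*rho*x*y + y*y) / (2*c)) / (2*math.pi*math.sqrt(c))
            fx = math.exp(-x*x/2) / math.sqrt(2*math.pi)
            fy = math.exp(-y*y/2) / math.sqrt(2*math.pi)
            s += f * math.log(f / (fx * fy)) * h * h
    return s

rho = 0.8
print(round(mi_normal_closed(rho), 4), round(mi_normal_numeric(rho), 4))
```

Note how M grows without bound as |ρ| → 1, unlike ρ itself; this is one sense in which it serves as a common metric across families.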

Friday, April 27, 11:30 AM – 12:30 PM

Location: Funger 320 (2201 G Street, NW)

Optimal Stopping Problem for Stochastic Differential Equations with Random Coefficients

Speaker: Mou-Hsiung (Harry) Chang, Mathematical Sciences Division, U.S. Army Research Office


This talk is based on the paper “Optimal Stopping Problem for Stochastic Differential Equations with Random Coefficients”, Mou-Hsiung Chang, Tao Pang, and Jiongmin Yong, SIAM J. Control & Optimization, vol. 48, No. 2, pp. 941-971, 2009. The paper received the 2011 SIAM Control and Systems Activity Group best paper award. In this talk we consider an optimal stopping problem for stochastic differential equations with random coefficients. The dynamic programming principle leads to a Hamilton-Jacobi-Bellman equation, which, for the current case, is a backward stochastic partial differential variational inequality (BSPDVI, for short) for the value function. Well-posedness of such a BSPDVI is established, and a verification theorem is proved.

Friday, April 13, 4:00 PM – 5:00 PM

Location: Duques 553 (2201 G Street, NW)

Fallacies of Certainty in Operational Decision Models

Speaker: Dr. Suvrajeet Sen, Professor in Integrated Systems Engineering, Ohio State University


For most practitioners in government and industry, uncertainty is a fact of life. Yet, decision aids for many operational questions set aside uncertainty because it is supposedly difficult to model, solve, or both. Drawing upon several industrial applications (network planning, inventory control, etc.), we will demonstrate that the state of the art for including uncertainty in decision models has come a long way. We will present the case that the boom in business analytics, coupled with algorithmic advances in stochastic programming, provides a unique opportunity for models that better support operational decisions under uncertainty.

Short Biography:

Suvrajeet Sen is Professor of Integrated Systems Engineering at The Ohio State University (OSU). Until recently, he was also the Director of the Center for Energy, Sustainability, and the Environment at OSU. Prior to joining OSU, he was a Professor at the University of Arizona, and also served as a program director at NSF where he was responsible for the Operations Research, and the Service Enterprise Engineering programs. Starting in August 2012, he will assume a position on the faculty at the University of Southern California. Professor Sen is a Fellow of INFORMS. He has served on the editorial board of several journals, including Operations Research as Area Editor for Optimization, and as Associate Editor for INFORMS Journal on Computing, and Journal of Telecommunications Systems. Professor Sen founded the INFORMS Optimization Section.

Friday, March 30, 11:00 AM – 12:00 PM

Location: Funger 620 (2201 G Street, NW)

Managing Opportunistic Supplier Product Adulteration: Deferred Payments, Inspection, and Combined Mechanisms

Speaker: Volodymyr Babich, McDonough School of Business, Georgetown University


Recent cases of product adulteration by foreign suppliers have compelled many manufacturers to re-think approaches to deterring suppliers from cutting corners, especially when manufacturers cannot fully monitor and control the suppliers’ actions. In this paper we study three mechanisms for dealing with the product adulteration problem: (a) the deferred payment mechanism: the buyer pays the supplier after the deferred payment period only if no adulteration has been discovered by the customers; (b) the inspection mechanism: the buyer pays the supplier immediately, contingent on the product passing inspection; and (c) the combined mechanism: a combination of the deferred payment and inspection mechanisms. We show that the inspection mechanism cannot completely deter suppliers from product adulteration, while the deferred payment mechanism can. Surprisingly, the combined mechanism is redundant: either the inspection or the deferred payment mechanism alone performs just as well. Finally, we identify four factors that determine the dominance of the deferred payment mechanism over the inspection mechanism: (a) the inspection cost relative to inspection accuracy, (b) the buyer’s liability for adulterated products, (c) the difference in financing rates for the buyer and the supplier relative to the defects discovery rate by customers, and (d) the difference in production costs for adulterated and unadulterated products. We find that the deferred payment mechanism is preferable to inspection if the threat of adulteration (either the incentive to adulterate or its consequences) is low. The paper is available (here).

Friday, March 23, 3:30 – 4:30 PM

Location: Duques 553 (2201 G Street, NW)

Semi-parametric Bayesian Modeling of Spatiotemporal Inhomogeneous Drift Diffusions in Single-Cell Motility

Speaker: Ioanna Manolopoulou, Department of Statistical Science, Duke University


We develop dynamic models for observations from independent time series influenced by the same underlying inhomogeneous drift. Our methods are motivated by modeling single cell motion through a Langevin diffusion, using a flexible representation for the drift as radial basis kernel regression. The primary goal is learning the structure of the tactic fields through the dynamics of lymphocytes, critical to the immune response. Although individual cell motion is assumed to be independent, cells interact through secretion of chemicals into their environment. This interaction is captured as spatiotemporal changes in the underlying drift, allowing us to flexibly identify regions in space where cells influence each other’s behavior. We develop Bayesian analysis via customized Markov chain Monte Carlo methods for single cell models, and multi-cell hierarchical extensions for aggregating models and data across multiple cells. Our implementation explores data from multi-photon vital microscopy in murine lymph node experiments, and we use a number of visualization tools to summarize and compare posterior inferences on the 3-dimensional tactic fields.

Friday, March 23, 11:00 AM – 12:00 PM

Location: Duques 553 (2201 G Street, NW)

Dynamic Multiscale Spatio-Temporal Models for Gaussian Areal Data

Speaker: Marco A. Ferreira, Department of Statistics, University of Missouri – Columbia


We introduce a new class of dynamic multiscale models for spatio-temporal processes arising from Gaussian areal data. Specifically, we use nested geographical structures to decompose the original process into multiscale coefficients which evolve through time following state-space equations. Our approach naturally accommodates data observed on irregular grids as well as heteroscedasticity. Moreover, we propose a multiscale spatio-temporal clustering algorithm that facilitates estimation of the nested geographical multiscale structure. In addition, we present a singular forward filter backward sampler for efficient Bayesian estimation. Our multiscale spatio-temporal methodology decomposes large data-analysis problems into many smaller components and thus leads to scalable and highly efficient computational procedures. Finally, we illustrate the utility and flexibility of our dynamic multiscale framework through two spatio-temporal applications. The first example considers mortality ratios in the state of Missouri, whereas the second examines agricultural production in Espírito Santo State, Brazil.

Friday, March 9, 2:00 – 3:00 PM

Location: Duques 453 (2201 G Street, NW)

On Coverage & Detection Problems in Sensor Networks

Speaker: Dr. Bimal Roy, Director, Indian Statistical Institute, Kolkata


Since a sensor has limited communication capability, covering a “field” with sensors so that communication in the network is smooth is a challenging problem. We propose a method of dropping sensors from a helicopter and then using an actuator (a robot with limited intelligence and carrying capability) to make minor adjustments. Once the sensors are placed, detecting an event (for example, an explosive) is the next challenge. Assuming a model for sensing, a method based on standard hypothesis testing is proposed.

Wednesday, March 7, 4:00 – 5:00 PM

Location: Duques 453 (2201 G Street, NW)

Business Analytics Degrees: Disruptive Innovation or Passing Fad?

Speaker: Michael Rappa, Founding Director, Institute for Advanced Analytics, North Carolina State University


Recently more and more schools have begun offering degrees in business analytics. This talk will use the nation’s first Master of Science in Analytics, now in its fifth year, as a backdrop to discuss the rise of analytics degree programs and the implications for business schools. In a future where data-driven decisions will be critically important to the success of business, will analytics become the impetus for disruptive innovation that transforms business education? Or is analytics simply the latest in a long line of management fads soon to be forgotten?

Wednesday, March 7, 10:30-11:45 AM

Location: Duques 453 (2201 G Street, NW)

Adaptive Convex Enveloping for Multidimensional Stochastic Dynamic Programming

Speaker: Sheng Yu, Engineering Management & Systems Engineering, George Washington University


Adaptive Convex Enveloping is a powerful general-purpose method for solving convex stochastic dynamic programs. With its optimization-oriented design, Adaptive Convex Enveloping easily handles large numbers of decision variables and constraints with the speed and reliability of convex optimization, and approximates the value function with error control over the entire state space. We discuss interesting aspects and strengths of the new method, and apply it to battery station management to find the optimal policy for charging electric vehicle batteries.

Friday, March 2, 11:00-12:00PM

Location: Funger 420 (2201 G Street, NW)

Providers’ Profiling for Supporting Decision Makers in Cardiovascular Healthcare

Speaker: Francesca Ieva, Dipartimento di Matematica “F.Brioschi”, Politecnico di Milano


Investigations of surgical performance have long used routinely collected clinical data to highlight unusual provider outcomes. In addition, there are a number of regular reports using routinely collected data to produce indicators for hospitals. As well as highlighting possible high- and low-performers, such reports help in understanding the reasons behind variation in health outcomes, and provide a measure of performance which may be compared with benchmarks or targets, or with previous results to examine trends over time. Statistical methodology for provider comparisons has been developed in the context of both education and health. Adjusting for patient severity (case-mix) is a challenging task, since it requires deep knowledge of the phenomenon from a clinical, organizational, logistic, and epidemiological point of view. Because such adjustment is always expected to be somewhat inadequate, unavoidable residual variability (over-dispersion) will generally exist between providers. It is then crucial that a statistical procedure be able to assess whether a provider may be considered “unusual”. In particular, although hierarchical models are recommended for describing hospital performance because they account for the nested structure of the data, it is not straightforward to assess unusual performance within them. Studies of variations in health care utilization and outcomes involve the analysis of multilevel clustered data. Such studies quantify the role of contributing factors (patients and providers) and assess the relationship between health-care processes and outcomes. We develop Bayes rules under different loss functions for hospital report cards when Bayesian semiparametric hierarchical models are used, and discuss the impact of assuming different loss functions on the number of hospitals identified as “not acceptably performing”.
The analysis is carried out on a case study dataset arising from one of the clinical surveys of the Strategic Program of Regione Lombardia, concerning patients admitted with STEMI to one of the hospitals of its Cardiological Network. The major aim is to compare different loss functions for discriminating among health care providers’ performances, together with assessing the role of patients’ and providers’ characteristics on survival outcomes. The application of this theoretical setting to the problem of managing a Cardiological Network is an example of how Bayesian decision theory can be employed within the clinical governance of Regione Lombardia. It may point out where investments are most likely to be needed, and could help avoid missing opportunities for quality improvement.

Tuesday, February 21st 11:00 AM – 12:00 PM

Location: Funger 420 (2201 G Street, NW)

Fairness in Sharing Gains and Losses

Speaker: Dr. Luc Wathieu, Associate Professor, McDonough School of Business, Georgetown

Authors: Luc Wathieu (Georgetown University), Guillermo Baquero (ESMT, Berlin), Willem Smit (Singapore Management University)


We conducted an experimental exploration of ultimatum games involving gains and losses of varying amounts. Proposers indicated their offers in gain games and in neatly comparable loss games; respondents indicated the minimum acceptable gain and the maximum acceptable loss (n=326). We find a significant “generosity effect”: proposers take the lion’s share of gains, but respondents endure less than 50% of losses. We explain our results with a “Fairness Requirement Theorem” involving reference dependence and loss aversion.

Friday, February 17, 2012, 3:00-4:00 PM

Location: Duques Room 553 (2201 G Street, NW)

Data & Analytics – Enabling Consumer Preference

Speaker: Raghan Lal, Head of Analytics, VISA

Friday, February 10, 2012, 11:00-12:00PM

Location: Duques Room 651 (2201 G Street, NW)

Optimization and Resource Allocation Models in an Aviation Security System

Speaker: Rajan Batta, Professor of Industrial and Systems Engineering, University at Buffalo


This talk will summarize results from a recently completed NSF project related to airport security modeling. The general theme is the development and analysis of optimization and resource allocation models. The first part of the talk will delineate results for a model that focuses on security in the area prior to checkpoint screening. The second part of the talk will develop and present three separate models that all focus on improving the efficiency of checkpoint screening. For the second part of the talk implementation issues will also be discussed.

Thursday, February 9, 2012, 10:30-11:30 AM

Location: Duques Room 520 (2201 G Street, NW)

Solving Two-Stage Stochastic Steiner Tree Problems by Two-Stage Branch-and-Cut

Speaker: Ivana Ljubic (Decision, Operations, and Information Technologies Department, The Robert H. Smith School of Business, University of Maryland)


Network design problems frequently occur in various practical areas, e.g., in the design of fiber optic networks or in the development of district heating or water supply systems. Most network design problems are NP-hard combinatorial optimization problems. In practice, network design problems are often subject to uncertainty in the input data: the actual demand patterns or connection costs may become known only after the network has been built. In that case, networks found by solving an instance that assumes complete knowledge of the input up-front might not provide appropriate solutions if deviations from the assumed scenario are encountered. Stochastic and robust optimization are two promising ways to take these uncertainties into account. In this talk we consider the Steiner tree problem under a two-stage stochastic model with recourse and finitely many scenarios. In this problem, edges are purchased in the first stage, when only probabilistic information on the set of terminals and the future edge costs is known. In the second stage, one of the given scenarios is realized and additional edges are purchased in order to interconnect the set of (now known) terminals. The goal is to decide on the set of edges to be purchased in the first stage while minimizing the overall expected cost of the solution. We consider mixed integer programming formulations for this problem and propose a two-stage branch-and-cut (B&C) approach in which L-shaped and integer-L-shaped cuts are generated. In our computational study we compare the performance of two variants of our algorithm with that of a B&C algorithm for the extensive form of the deterministic equivalent (EF). We show that, as the number of scenarios increases, the new approach significantly outperforms the (EF) approach.

This is a joint work with Immanuel Bomze (University of Vienna), Markus Chimani (University of Jena), Michael Juenger (University of Cologne), Petra Mutzel and Bernd Zey (TU Dortmund).

Friday, January 27, 2012, 11:00-12:00PM

Location: Duques Room 521 (2201 G Street, NW)

Title: Markov Chain Monte Carlo for Inference on Phase-Type Models

Speaker: Simon Wilson, School of Computer Science and Statistics Trinity College, Dublin, Ireland


Bayesian inference for phase-type distributions is considered when data consist only of absorption times. Extensions to the methodology developed by Bladt et al. (2003) are presented which enable specific structure to be imposed on the underlying continuous time Markov process and expand computational tractability to a wider class of situations. The conditions for maintaining conjugacy when structure is imposed are shown. Part of the original algorithm involves simulation of the unobserved Markov process, and the main contribution is the resolution of computational issues which can arise here. Direct conditional simulation, together with exploiting reversibility when available, underpins the changes. Ultimately, several variants of the algorithm are produced, their relative merits explained, and guidelines for variant selection provided. The extended methodology thus advances the modelling and tractability of Bayesian inference for phase-type distributions where there is direct scientific interest in the underlying stochastic process: the added structural constraints more accurately represent a physical process and the computational changes make the technique practical to implement. A simple application to a repairable redundant electronic system, where ultimate system failures (as opposed to individual component failures) comprise the data, is presented. This provides one example of a class of problems for which the extended methodology improves both parameter estimates and computational speed.

Friday, January 6, 2012, 11:15 AM – 12:15 PM

Location: Duques Room 553 (2201 G Street, NW)

Fall 2011

An Overview of Drinking Water Laws, Regulations, and Policy

Speaker: J. Alan Roberson, Director of Federal Relations, American Water Works Association


This presentation will summarize the evolution of drinking water laws and regulations, starting with the passage of the initial Safe Drinking Water Act (SDWA) in 1974 and its subsequent amendments in 1986 and 1996. The Environmental Protection Agency (EPA) published 18 major drinking water regulations between 1976 and 2006; the evolution of these regulations will be discussed, including how contaminants are selected for regulation and how the numerical standards are developed. The policy aspects of the regulatory development process will also be discussed, along with how politics can shape drinking water regulations within the current statutory framework.

Time: Friday, November 4, 2011, 3:30 PM – 4:30 PM

Location: Duques Room 553 (2201 G Street, NW)

Bayes’ Rule: The Theory That Would Not Die

Speaker: Sharon Bertsch McGrayne

Sponsored by: The Departments of Physics, Statistics, The Institute for Integrating Statistics in Decision Sciences, and The Institute for Reliability and Risk Analysis of GWU.


From spam filters and machine translation to the drones over bin Laden’s compound, Bayes’ rule pervades modern life. Thomas Bayes and Pierre-Simon Laplace discovered the rule roughly 250 years ago but, for most of the 20th century, it was deeply controversial, almost taboo among academics. My talk will range over the history of Bayes’ rule, highlighting Alan Turing who decrypted the German Enigma code and Jerome Cornfield of NIH and George Washington University who established smoking as a cause of lung cancer and high cholesterol as a cause of cardiovascular disease. The talk will be based on my recent book, The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines & Emerged Triumphant from Two Centuries of Controversy (Yale University Press).
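The rule the talk celebrates fits in a few lines. A toy spam-filter update, with entirely hypothetical probabilities:

```python
# Toy one-word spam check via Bayes' rule; all numbers are hypothetical.
p_spam = 0.4                 # prior P(spam)
p_word_given_spam = 0.6      # P("free" appears | spam)
p_word_given_ham = 0.05      # P("free" appears | not spam)

# Bayes' rule: P(spam | word) = P(word | spam) P(spam) / P(word)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
posterior = p_word_given_spam * p_spam / p_word
print(round(posterior, 3))   # → 0.889
```

One word pushes a 40% prior to an 89% posterior; real filters multiply such updates over many words.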

About the speaker:

Sharon Bertsch McGrayne is also the author of Nobel Prize Women in Science (National Academy Press), and Prometheans in the Lab (McGraw-Hill).

A former newspaper reporter, she is also co-author of The Atom, Electricity & Magnetism (Encyclopaedia Britannica). She has been a panelist on NPR’s Science Friday, and her work has been featured on Charlie Rose. She has written for Scientific American, APS News, Science, Isis, the Times Higher Education Supplement, and other publications. Her books have been reviewed by Nature, Chemical & Engineering News, New Scientist, JAMA, Physics Today, Scientific American, Science Teacher, American Journal of Physics, Physics Teacher, Popular Mechanics, and others. Her webpage is at www.McGrayne.com.

Time: Friday, October 21, 2011, 4:00 PM

Location: Duques Room 651 (2201 G Street, NW)

Unit Root Tests – A Review

Speaker: Sastry G. Pantula, Director of the Division of Mathematical Sciences at the National Science Foundation


Unit root tests in time series analysis have received a considerable amount of attention since the seminal work of Dickey and Fuller (1976). In this talk, some of the existing unit root test criteria will be reviewed, along with their size, power, and robustness to model misspecification. Tests where the alternative hypothesis is a unit root process, and tests for trend stationarity versus difference stationarity, will also be covered, together with current work on unit root test criteria. Examples of unit root testing on time series will be presented, and extensions to multivariate and heteroscedastic models will be discussed.
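As a rough illustration of the idea behind the Dickey-Fuller test (regress Δy_t on y_{t−1} and examine the t-statistic of the slope), the sketch below contrasts a simulated random walk with a stationary AR(1); the no-drift regression and all simulation settings are illustrative choices, not the talk's methodology:

```python
import math, random

def df_stat(y):
    """Dickey-Fuller t-statistic for the regression dy_t = rho*y_{t-1} + e_t
    (no drift, no trend). Strongly negative values reject a unit root."""
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    sxx = sum(v * v for v in x)
    rho = sum(a * b for a, b in zip(x, dy)) / sxx
    rss = sum((d - rho * v) ** 2 for v, d in zip(x, dy))
    se = math.sqrt(rss / (len(dy) - 1) / sxx)
    return rho / se

random.seed(0)
n = 500
e = [random.gauss(0, 1) for _ in range(n)]
walk, ar = [0.0], [0.0]
for t in range(n):
    walk.append(walk[-1] + e[t])      # unit root: coefficient 1 on the lag
    ar.append(0.5 * ar[-1] + e[t])    # stationary AR(1)
print(round(df_stat(walk), 2), round(df_stat(ar), 2))
```

The stationary series yields a large negative statistic, while the random walk's statistic stays near zero; the subtlety the literature addresses is that the null distribution of this statistic is nonstandard, so its critical values are not those of the usual t-test.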

About the speaker:

Sastry Pantula received his B.Stat. and M.Stat. from the Indian Statistical Institute, Kolkata, and a Ph.D. in Statistics from Iowa State University. He has been a faculty member at North Carolina State University, where he served as Director of Graduate Programs from 1994-2002 and as Department Head from 2002-2010. He was the 2010 ASA President. Currently, he is on loan to the National Science Foundation, serving as the Director of the Division of Mathematical Sciences.

Time: Friday, September 30th 4:00-5:00 pm

Location: Duques 553 (2201 G Street, NW), Followed by Wine and Cheese Reception

Unlocking Online Communities: How to measure the weight of the silent masses in authority ranking?

Speaker: Ahmed A. Gomaa, Imedia Streams, LLC


Social media is increasingly being used by enterprises to reach out to their customers for advertising campaigns, product reviews, and users’ preferences for new product development. This requires extraction and aggregation of information in the social media space to facilitate the decision-making process. A key challenge is to automate this process of information discovery, extraction, and aggregation along relevant dimensions such as age, gender, location, interest, sentiment, and authority. We have developed iPoint™, a system that enables the discovery, extraction, and aggregation of social media, measuring the sentiments depicted online and providing an authority score for each author based on their interests, along with the author’s age, gender, and location. We then use this information in conjunction with our ad server iServe™. We use the intelligence derived from iPoint™ as a daily updated internet panel that measures the internet waves to help distribute ads within advertising networks. Positive results are recorded in comparison to existing targeting technologies using both the Yahoo! Right Media Exchange and the Google content network. In this presentation we will focus on our authority ranking model, which depends on eigenvalue calculations in which we consider the number of posts by each author, the number of links and back comments on the posts, the relevancy of each post within its community, and the amount of silent interaction with the posts. We present how we calculate the silent interactions in our model and how we use sparse matrix properties to optimize calculation and storage time. The authority rank influences the general sentiment of a topic’s interest level: sentiments from a highly ranked, more influential author carry more weight than those of a less influential author, and thus steer the community direction.

Time: Friday, September 16th 3:30-4:30 pm

Location: Duques 553 (2201 G Street, NW)

On implications of demand censoring in the newsvendor problem

Speaker: Alp Muharremoglu, School of Business, UT Dallas


We consider a repeated newsvendor problem in which the decision-maker (DM) does not have access to the underlying distribution of discrete demand. We analyze three informational settings: (i) the DM observes realized demand in each period; (ii) the DM only observes realized sales; and (iii) the DM observes realized sales but also a lost-sales indicator that records whether demand was censored or not. We analyze the implications of censoring on performance and the key characteristics that effective policies should possess. We provide a characterization of the best achievable performance in each of these cases, where we measure performance in terms of regret: the worst-case difference between the cumulative costs of any policy and the optimal cumulative costs with knowledge of the demand distribution. In particular, we show that for both the first and the third settings, the best achievable performance is bounded (i.e., does not scale with the number of periods), while in the second setting it grows logarithmically with the number of periods. We link the latter degradation in performance to the need for continuous exploration with sub-optimal decisions and provide a characterization of the frequency with which this should occur.
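In the first (fully observed) setting, a natural benchmark policy simply orders the critical fractile of the empirical demand distribution; a minimal sketch with hypothetical costs and demand data:

```python
# Newsvendor critical-fractile order from an empirical demand sample.
# The costs and the demand sample below are hypothetical.

def newsvendor_order(demands, underage, overage):
    """Smallest q whose empirical CDF reaches cu/(cu+co)."""
    target = underage / (underage + overage)
    for q in sorted(set(demands)):
        if sum(d <= q for d in demands) / len(demands) >= target:
            return q
    return max(demands)

sample = [3, 7, 4, 6, 5, 8, 4, 6, 5, 7]
print(newsvendor_order(sample, underage=3, overage=1))  # fractile 0.75 → 7
```

Under sales censoring (setting ii), this empirical CDF is biased because demand above the order quantity is never seen, which is exactly why the talk's policies must keep exploring with occasionally higher orders.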

Time: Friday, October 14th 11:00-12:00 pm

Location: Duques 553 (2201 G Street, NW)

Spring 2011

Key Management and Key Pre-distribution

Speaker: Dr. Bimal Roy, Director, Indian Statistical Institute, Kolkata, India


In modern cryptography, the security of a cryptosystem lies in the secrecy of the key, not in the secrecy of the encryption algorithm. Hence key management is a very important issue. There are several methods for key management, but most are based on public-key cryptography, which in turn is typically based on number theory. Key pre-distribution is an alternative method based on combinatorics. This method may be used in scenarios where the security requirement is less stringent.
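As a point of contrast for the combinatorial designs discussed in the talk, the naive baseline pre-loads every node with one key per peer, so any two nodes share exactly one key at the cost of O(n) storage per node:

```python
# Naive pairwise key pre-distribution: node i is pre-loaded with one key
# per other node, so any two nodes share exactly one key. Combinatorial
# designs (the talk's topic) achieve far smaller key rings than this
# storage-heavy baseline.

def key_ring(i, n):
    """Keys held by node i; each key is identified by its node pair."""
    return {frozenset((i, j)) for j in range(n) if j != i}

n = 6
rings = [key_ring(i, n) for i in range(n)]
shared = rings[2] & rings[5]
print(len(rings[0]), shared == {frozenset((2, 5))})
```

Design-based schemes trade this perfect pairwise connectivity for much smaller rings, accepting that some node pairs must communicate through intermediaries — the relaxed security setting mentioned above.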

Time: Wednesday, April 20th 3:30-4:30 pm

Location: Duques Hall, Room 553 (2201 G Street NW)

Particle Learning for Fat-tailed Distributions

Speaker: Hedibert Lopes, University of Chicago Booth School of Business


It is well-known that parameter estimates and forecasts are sensitive to assumptions about the tail behavior of the error distribution. In this paper we develop an approach to sequential inference that also simultaneously estimates the tail of the accompanying error distribution. Our simulation-based approach models errors with a t-distribution and, as new data arrive, we sequentially compute the marginal posterior distribution of the tail thickness. Our method naturally incorporates fat-tailed error distributions and can be extended to other data features such as stochastic volatility. We show that the sequential Bayes factor provides an optimal test of fat tails versus normality. We provide an empirical and theoretical analysis of the rate of learning of tail thickness under a default Jeffreys prior. We illustrate our sequential methodology on the British pound/US dollar daily exchange rate and on data from the 2008-2009 credit crisis using daily S&P 500 returns. Our method naturally extends to multivariate and dynamic panel data.
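With all parameters held fixed for illustration (the paper learns them sequentially via particle methods), a log Bayes factor of a Student-t model against a normal model is just a running sum of log-likelihood ratios; on simulated fat-tailed data it drifts upward, favoring the t model:

```python
import math, random

# Sequential log Bayes factor, Student-t(3) vs. standard normal, with all
# parameters fixed for illustration; the paper's method learns the tail
# thickness (and more) sequentially instead.

def log_t_pdf(x, nu=3.0):
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi)
            - (nu + 1) / 2 * math.log(1 + x * x / nu))

def log_norm_pdf(x):
    return -0.5 * math.log(2 * math.pi) - 0.5 * x * x

random.seed(1)
# Fat-tailed data: N(0,1) divided by sqrt(chi2_3 / 3) gives Student-t(3).
data = [random.gauss(0, 1)
        / math.sqrt(sum(random.gauss(0, 1) ** 2 for _ in range(3)) / 3)
        for _ in range(400)]
log_bf = sum(log_t_pdf(x) - log_norm_pdf(x) for x in data)
print(round(log_bf, 1))   # positive: evidence favors the fat-tailed model
```

Each new observation adds one term to the sum, which is what makes the Bayes factor a natural sequential test statistic.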

Time: Thursday, April 7th 11:30-12:30pm

Location: Duques 652 (2201 G Street, NW)

The Planning of Guaranteed Targeted Display Advertising

Speaker: John Turner, University of California, Irvine


As targeted advertising becomes prevalent in a wide variety of media vehicles, planning models become increasingly important to ad networks that need to match ads to appropriate audience segments, provide a high quality of service (meet advertisers’ goals), and ensure ad serving opportunities are not wasted. We define Guaranteed Targeted Display Advertising (GTDA) as a class of media vehicles that include webpage banner ads, video games, electronic outdoor billboards, and the next generation of digital TV, and formulate the GTDA planning problem as a transportation problem with quadratic objective. By modeling audience uncertainty, forecast errors, and the ad server’s execution of the plan, we derive sufficient conditions that state when our quadratic objective is a good surrogate for several ad delivery performance metrics. Moreover, our quadratic objective allows us to construct duality-based bounds for evaluating aggregations of the audience space, leading to two efficient algorithms for solving large problems: the first intelligently refines the audience space into successively smaller blocks, and the second uses scaling to find a feasible solution given a fixed audience space partition. Near-optimal schedules can often be produced despite significant aggregation.

Time: Friday, March 25th 3:30-4:30pm

Location: Duques 553 (2201 G Street, NW)

Optimal dispatching models for server-to-customer systems with classification errors

Speaker: Laura A. McLay, Department of Statistical Sciences and Operations Research, Virginia Commonwealth University


How to dispatch servers to prioritized, spatially-located customers is a critical issue in server-to-customer systems. Such decisions are complicated when servers have different operating characteristics, customers are prioritized, and there are errors in assessing customer priorities. This research provides a model for optimizing dispatching protocols using infinite horizon, average cost Markov decision process models. The proposed model determines how to optimally dispatch heterogeneous servers to customers to maximize the long run average utility in a Markov decision process. Our model sheds light on when to dispatch the closest server to a customer and when to dispatch a farther server to a customer. Dispatching is complicated when servers must be both efficiently and equitably dispatched to customers. Four types of equity side constraints are considered that reflect customer and server equity. The equity constraints draw upon the decision analytic and social science literature in order to compare the effects of different notions of equity on the dispatching policies. The model has applications to emergency medical services and military medevacs.

Time: Friday, February 4th 11:30-12:30pm

Location: Duques 553 (2201 G Street, NW)

Optimal Dynamic Return Management of Fixed Inventories

Speaker: Mehmet Altug, Department of Decision Sciences, The George Washington University


While the primary effort of all retailers is to generate that initial sale, return management is generally treated as a secondary issue that does not necessarily need the same level of planning. In this paper, we position return management as a process that is at the interface of both inventory and revenue management by explicitly incorporating the return policy of the retailer in the consumer’s valuation. We consider a retailer that sells a fixed amount of inventory over a finite horizon. We assume that the return policy is a decision variable which can be changed dynamically at every period. According to a hypothesis which is quite prevalent in the retailing industry, while flexible and more generous return policies increase consumer valuation and generate more demand, they also induce more returns. In this environment, we characterize the optimal dynamic return policies under two cost-of-return scenarios. We show a conditional monotonicity result and discuss how these return policies change with respect to the retailer’s inventory position and time. We then propose a heuristic and prove that it is asymptotically optimal. We also study the joint dynamic pricing and dynamic return management problem in the same setting and propose two more heuristics whose performance is tested numerically and found to be close to optimal for higher inventory levels. We finally extend our model to multiple competing retailers and characterize the resulting equilibrium return policy and prices.

Time: Wednesday, February 2nd 11:00-12:15 pm

Location: Funger 620

Fall 2010

Causes and Consequences of Understaffing in Retail Stores

Speaker: Vidya Mani, Kenan-Flagler Business School, University of North Carolina at Chapel Hill


In this paper we study the causes and consequences of understaffing in a retail store by examining the longitudinal data on store managers’ labor planning decisions and store performance from 41 stores of a large retail chain. By assuming store managers are profit maximizing agents, we impute the cost of labor used by store managers in making their labor planning decisions using a structural estimation technique. We show that store managers of this retail chain differ considerably in their imputed cost of labor and these costs are significantly higher compared to the average hourly wage rate for retail salespersons. This difference partially explains the understaffing observed in retail stores. Furthermore, we show that understaffing is predominantly present during peak hours but such understaffing is an optimal response by store managers due to scheduling constraints. We quantify the consequences of understaffing on store profitability by running counterfactual experiments. Finally, we show that understaffing is negatively associated with store performance measures like conversion rate and basket value.

Time: Friday, December 14th 11:15-12:15 pm

Location: Funger 320 (2201 G Street, NW)

Price-Quoting Strategies of a Tier-Two Supplier

Speaker: Bin Hu, University of Michigan


This paper studies the price-quoting strategies used by a tier-two supplier, whose tier-one customers compete for an OEM’s indivisible contract. At most one of the tier-two supplier’s quotes will ultimately result in downstream contracting and hence produce revenue for her. We characterize the tier-two supplier’s optimal price-quoting strategies and show that she will use one of two possible types of strategies, with her choice depending on the tier-one suppliers’ profit potentials: secure, whereby she will always have business; or risky, whereby she may not have business. Addressing potential fairness concerns, we also study price-quoting strategies in which all tier-one suppliers receive equal quotes. Finally, we show that a tier-two supplier’s optimal mechanism resembles auctioning a single quote among the tier-one suppliers. This paper can assist tier-two suppliers in their pricing decisions, and provides general insights into multi-tier supply chains’ pricing dynamics.

Time: Monday, December 13th 11:30-12:35 pm

Location: Funger 320 (2201 G Street, NW)

Managing Potentially Hazardous Substances from the Firm and NGO Perspective

Speaker: Tim Kraft, Stanford University


As public awareness of environmental hazards increases, a growing concern for corporations is the potential negative environmental impact of their products and the chemicals those products contain. When a substance within a product is identified as potentially hazardous (e.g., bisphenol-A (BPA) in baby bottles and triclosan in soaps and toothpastes), without regulations in place it is often difficult for a firm to financially justify the proactive replacement of the substance. From the perspective of non-governmental organizations (NGOs), groups such as ChemSec play an active role in removing potentially hazardous substances from commercial use by either targeting firms with negative press or by petitioning regulatory bodies to increase the likelihood of regulation. An NGO interested in influencing firms to replace a potentially hazardous substance must develop a strategy for how to best utilize its often limited resources. In this talk, I will present two papers that address these issues.

In the first part of the talk, we study the decisions of firms when a potentially hazardous substance is identified. A firm’s decisions are complicated by uncertainty in substance risk, regulations, and market sensitivity, as well as the existence of external stakeholders such as NGOs who may want the firm to develop a replacement substance. We investigate the timing and intensity of the firm’s investments to replace a substance. A two-stage dynamic program is used to model the problem. Our results indicate that large firms, in particular, must dedicate resources to monitoring and potentially planning the replacement of a substance. Although the additional management will be costly, it may prevent even larger losses such as inventory write-offs, profit losses, or liability costs. In the second part of the talk, we investigate the role NGOs play in removing a potentially hazardous substance from commercial use. We analyze the NGO’s decisions of who to target – the industry or the regulatory body – and how much effort to exert. In addition, we further investigate whether NGOs should take a pragmatic approach and partner with firms or maintain an antagonistic relationship. A game-theoretic, two-stage model is used to model the problem. Our results indicate that pressuring the regulatory body is most effective when the existing likelihood of regulation is low and the expected penalty for not being prepared for regulation is high.

Time: Wednesday, December 8th 11:15-12:20 pm

Location: Duques 651 (2201 G Street, NW)

On the Tradeoff Between Remanufacturing and Recycling

Speaker: Tharanga Rajapakshe, School of Management, The University of Texas at Dallas


For a firm, the dual goals – induced by the drive on Extended Producer Responsibility – of meeting environmental regulations and positioning itself as a socially-responsible entity necessitate an understanding of supply- and demand-side implications as well as product design characteristics. These, in turn, result in a healthy tradeoff between feasible sustainability measures, thus making the implementation of an appropriate option critical for long-term survival. Motivated by our interactions with two Dallas-based reverse-logistics firms, we analyze the tradeoff between two well-known product-recovery approaches: recycling and remanufacturing. Our setting is that of a manufacturer who produces and markets a product with the objective of maximizing profit. A unit of the product consists of two modules – Module A and Module B – that could each be either remanufactured or recycled. Module B incurs a higher per-unit production cost and is also priced higher than Module A. Once a module is recovered via a take-back mechanism, it can be either used in a remanufactured unit or further disassembled and recycled to recover its raw material, which can then be used to produce (albeit with different yields) new units of either Module A or Module B. Any unused units of the complete product, Module A, or Module B can be disposed of. Under this setting, we investigate three options: (i) recycling of Module A, (ii) remanufacturing of Module B, and (iii) recycling of Module A and remanufacturing of Module B.

We first provide a complete theoretical characterization of the regions of optimality of each option. Next, we study the impact of choosing an option in an ad-hoc manner on the manufacturer’s profit and analyze the sensitivity of this impact to changes in the supply-demand gap and the take-back fraction. Recognizing that emerging governmental regulations render the disposal cost particularly vulnerable to dis-economies of scale, we examine the impact of non-linear disposal cost on the (i) optimal amount recycled or remanufactured and (ii) choice of an optimal operational strategy. To obtain richer managerial insights, we introduce the concept of “ability of sustainability”, defined as a joint measure of the fraction of green consumers in the market, the take-back fraction, and product design characteristics such as the degree of substitutability of material, and examine its influence on the optimal option. Useful insights are developed on the sensitivity of the optimal choice to the relative profitabilities of the remanufacturing and recycling operations. Finally, based on the demand for the remanufactured product, we also analyze the cases when green consumers are flexible and when they are dedicated.

Time: Monday, December 6th 11:15-12:20 pm

Location: Duques 652 (2201 G Street, NW)

Anatomy of the Failure Rate Function: A Mathematical Dissection

Speaker: Nozer D. Singpurwalla, Professor of Statistics and of Decision Sciences, GWU


This is an expository talk. It is motivated by two recent developments. One is a reviewer’s comments on my recent book; the other is a presentation by one of our speakers at the 75th Anniversary Meeting of the Department, who was flirting with the meaning of “risk”. The notion of the failure rate function is perhaps the main contribution of reliability, survival analysis, and actuarial science to probability, with the exponentiation formula for survival being its main export. This formula is commonly used, its most recent client being mathematical finance. However, there are several caveats to the notion of the failure rate function, including a paradox that it spawns. These caveats make the exponentiation formula inexact and the paradox difficult to accept. In this talk, I will try to point out the caveats, introduce the notion of the product integral, and explain away the paradox via an animated example which includes some of my colleagues as characters in a mental game.

Time: Friday, December 3rd 3:00-4:00pm

Location: Duques 453 (2201 G Street, NW), followed by wine and cheese reception

A Unified Competing Risks Limited-Failure Model

Speaker: Sanjib Basu, Northern Illinois University


A competing risks framework refers to multiple risks acting on a system. This can result from multiple components or multiple failure modes and is often conceptualized as a series system. A limited-failure model postulates a fraction of the systems to be failure-free and can be formulated as a mixture model, or alternatively by a bounded cumulative intensity model. We develop models that unify the competing risks and limited-failure approaches. We describe Bayesian analysis of these models, and discuss conceptual, methodological and computational issues related to model fitting and model selection. We compare the performances of the two limited-failure approaches and illustrate them in an application.

Location: Duques 360 (2201 G Street, NW)

Choice-Based Revenue Management

Speaker: Garrett Van Ryzin, Columbia University Graduate School of Business


Using consumer choice models as a basis for revenue management (RM) is appealing on many levels. Choice models can naturally model important buy-up and diversion phenomena and can be applied to newer, undifferentiated low-fare structures and dynamic pricing problems. And recent research advances have now brought choice-based RM within striking distance of being truly practical. In this talk, we survey the recent research results in this area and discuss their implications for RM research and practice.

Time: Friday, October 22nd 2:30-3:30 pm

Location: Funger Hall 520 (2201 G Street, NW)

Bayesian Grouped Factor Models

Speaker: Merrill Liechty, LeBow College of Business, Drexel University


Firms that are publicly traded are classified based on their business models (i.e., how they make money) through industry classifications and based on their financial strength through debt ratings. As these classifications are based on the judgment of experts, it is an interesting question to determine the extent to which these classifications could be used to form prior distributions for correlation structures. Using a variable dimension Bayesian grouped factor model and standard classification schemes, we explore the value of these schemes with respect to model fit criteria, variance estimates of a tangency portfolio and value at risk calculations. In addition we demonstrate how this modeling framework can be used to include a firm which has just transitioned from being a privately held company to a publicly traded company with regards to asset allocation and risk assessment system.

Time: Friday, October 8th 3:30-4:30pm

Location: Duques 553 (2201 G Street, NW)

Obesity Index

Speaker: Roger M. Cooke, Resources for the Future and Delft University of Technology


Current notions of tail fatness or tail obesity rely on estimates of the density for extreme values. For example, the index of regular variation requires that, after an initial segment, the distribution is approximately Pareto and the mean excess function is approximately linear. Loss data we have studied are (a) very rich, (b) very fat tailed and (c) not remotely Pareto. This paper explores a measure of tail obesity for positive random variables which characterizes tail obesity in samples, and can be computed for familiar classes of distributions. If X1,…,X4 are independent samples of a positive random variable X, define Obx(X) = P{X1 + X4 > X2 + X3 | X1 > X2 > X3 > X4}, capturing the intuition, “the fatter the tail, the more the sum behaves like the max”. Properties of Obx will be described in the talk.
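Because the four draws are i.i.d., conditioning on a particular ordering is equivalent to simply sorting the sample, so Obx is easy to estimate by Monte Carlo. A small sketch (illustrative code, not the speaker's; for a rate-1 exponential the conditional probability works out to 3/4, and fatter tails push it higher):

```python
import random

def obx(sample, trials=200_000, seed=0):
    """Monte Carlo estimate of Obx(X) = P{X1 + X4 > X2 + X3 | X1 > X2 > X3 > X4}.
    For i.i.d. continuous draws, conditioning on the ordering is the same as
    sorting the four draws, so we sort and compare max+min to the middle pair."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b, c, d = sorted((sample(rng) for _ in range(4)), reverse=True)
        if a + d > b + c:
            hits += 1
    return hits / trials

# Thin-tailed exponential versus a fat-tailed Pareto (tail index 1.5)
exp_obx = obx(lambda rng: rng.expovariate(1.0))
pareto_obx = obx(lambda rng: rng.paretovariate(1.5))
print(exp_obx, pareto_obx)  # the Pareto estimate comes out noticeably larger
```

The exponential value of 3/4 follows from its order-statistic spacings: for four rate-1 exponentials, X(2)−X(1) ~ Exp(3) and X(4)−X(3) ~ Exp(1), and the defining event reduces to the second spacing exceeding the first.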

Time: Friday, October 1st, 2010, 3:30 pm – 4:30 pm (Followed by wine and cheese reception)

Location: Funger Hall 320 (2201 G Street, NW)

Learning Consumer Tastes through Dynamic Assortment

Speaker: Dorothee Honhon, University of Texas, Austin


How should a firm modify its product assortment over time when learning about consumer tastes? In this paper, we study assortment decisions on a horizontally differentiated product category where consumers’ tastes can be represented on a Hotelling line. We model this problem as a discrete time dynamic program; each period, the firm chooses an assortment to maximize total expected profits where the expectation is taken with respect to the firm’s subjective beliefs over consumer tastes. The consumers then choose a product from the assortment that maximizes their own utility and the firm observes sales, which provide censored information on consumer tastes, and updates beliefs using Bayes’ rule. The tradeoff is between the immediate profits from the sales and the informational gains. We show that it may be optimal for the firm to offer assortments that lead to losses in the current period in order to learn about consumer tastes. We also show that we can (partially) order assortments based on their information content and that the optimal assortment cannot be less informative than the myopically optimal assortment. This result is akin to the well-known ‘stock more’ result in newsvendor problems when the newsvendor is learning about demand through sales and lost sales are not observed. We also develop a Bayesian conjugate model that reduces the state space of the dynamic program and explore the properties of the value function and optimal policies.

Time: Wednesday, September 22nd, 2010, 11:00 am -12:15 pm

Location: Funger Hall 320 (2201 G Street, NW)

Spring 2010

Adventures in Sports Scheduling

Speaker: Michael Trick, Carnegie Mellon University Tepper School of Business


Major League Baseball is a multi-billion dollar per year industry that relies heavily on the quality of its schedule. Teams, fans, TV networks, and even political parties (in a way revealed in the talk) rely on the schedule for profits and enjoyment. Only recently have the computational tools of operations research been powerful enough to address the issue of finding optimal schedules. I will discuss my experiences in scheduling college basketball, major league baseball, and other sports, and discuss major trends in optimization that lead to practical scheduling approaches.

Time: Friday, April 30th 11:00 am -12:15 pm

Location: Duques 652 (2201 G Street, NW)

Stochastic Integer Programming Models for Air Traffic Flow Management Problems

Speaker: Michael O. Ball, Robert H. Smith School of Business & Institute for Systems Research, University of Maryland


In this paper we address a stochastic air traffic flow management problem. Our problem arises when airspace congestion is predicted, usually because of a weather disturbance, so that the number of flights passing through a volume of airspace (flow constrained area, FCA) must be reduced. We formulate an optimization model for the assignment of dispositions to flights whose preferred flight plans pass through an FCA. For each flight, the disposition can be either to depart as scheduled but via a secondary route, or to use the originally intended route but to depart with a controlled (adjusted) departure time and accompanying ground delay. We model the possibility that the capacity of the FCA may increase at some future time once the weather activity clears. The model is a two-stage stochastic program that represents the time of this capacity windfall as a random variable, and determines expected costs given a second-stage decision, conditioning on that time. A novel aspect of our model is to allow the initial secondary routes to vary from pessimistic (initial trajectory avoids weather entirely) to optimistic (initial trajectory assumes weather not present). We conduct experiments allowing a range of such trajectories and draw conclusions regarding appropriate strategies.

Time: Friday, March 12, 2010, 3:30-4:30 pm

Location: Duques 553 (2201 G Street, NW)

Multi- and Matrix-variate Time Series & Graphical Models

Speaker: Mike West, Department of Statistical Science, Duke University


I will review some recent and current developments in Bayesian modelling of multi- and matrix-variate time series, all involving the integration of graphical modelling ideas and methods with dynamic models. This includes graphical models to constrain multivariate stochastic volatility models in financial applications and extensions to matrix-variate time series with economic examples. Stochastic simulation and search for Bayesian computations in these models are key and will be discussed, as will some current research frontiers. The talk covers developments from projects in collaborations with current and past students Carlos Carvalho, Craig Reeson and Hao Wang.

Time: Friday, March 5, 2010, 4:00-5:00 pm

Location: Duques 553 (2201 G Street, NW)

Network Routing in a Dynamic Environment

Speaker: Nozer D. Singpurwalla, Department of Statistics, The George Washington University


Network routing, as done by network theorists, computer scientists, and operations research analysts, assumes that the failure probabilities of nodes and links are fixed and known. In many cases, this is an idealization. A case in point is the routing of material and personnel in the presence of improvised explosive devices (IEDs). The placement of IEDs by an active adversary makes the underlying probabilities dynamic. Assessing these probabilities calls for the pooling of data from diverse sources and the modeling of the socio-psychological behaviour of the adversary and the route planner. The situation is unconventional. In this talk I present a Bayesian approach for accomplishing the above. My approach has two novel features. The first is a strategy for specifying likelihoods that encapsulate adversarial behaviour, and the second is the generation of likelihoods empirically by sampling from the posterior distribution of a logistic regression.

Time: Friday, January 29, 2010, 4:00-5:00 pm

Location: Duques 553 (2201 G Street, NW)

A Strategic Perspective on Reverse Channel Design: Why a Less Cost-efficient Product Returns Channel Would Improve Manufacturer Profits

Speaker: Canan Savaskan-Ebert, R.H. Smith School of Business, University of Maryland; Kellogg School of Management, Northwestern University


A homeowner buys wallpaper only to find out that it does not look as good as anticipated in the room and decides to incur a 30% restocking fee to return it. A photographer pays a 20% restocking fee to return a lens after discovering that a lens with a different focal length would be better suited for his subject. A businessman buys a new smartphone and realizes its trade-off between battery life and functionality does not fit with his lifestyle. These are just a few examples of product returns, a key cost factor that represents a great financial concern for sellers. In fact, product returns cost U.S. companies more than $100 billion annually. It is estimated that the U.S. electronics industry alone spent $13.8 billion in 2007 to restock returned products (Lawton 2008). The bulk of these returns were non-defective items that simply weren’t what the consumer wanted. It is clear that product returns from consumers are costing companies a substantial amount of money. What is not as clear is who should pay for the cost and who should take responsibility for the returned units.

To eliminate returns and/or to recoup the cost of handling returns, many retailers today are adopting the practice of charging restocking fees to consumers as a penalty for making returns. In this paper, we employ an analytical supply chain model of a bilateral monopoly to examine how the product return policy, product prices, consumer demand and product return rates are affected by the choice of the agent (the manufacturer or the retailer) who assumes responsibility for taking back and salvaging returned products. This study provides an explanation for why some manufacturers may take back and salvage consumer returns even though the retailer can do so more effectively and cost-efficiently. Counter to common intuition, we show that the return penalty may be more severe even when returns are salvaged by a channel member who derives greater value from a returned unit. The manufacturer may earn greater profit by accepting returns even if the retailer has a more efficient outlet for salvaging units. As one of the very first studies on this topic in a supply chain context, this paper shows that by assuming returned-product responsibility, the manufacturer can use the refund scheme in the reverse channel as a means to align incentives in the forward supply process.

Time: Thursday, January 28, 2010, 11:00-12:00 noon

Location: Funger Hall 520

Coordinating Semi-Conductor Supply Chains

Speaker: Mehmet Altug, Department of Decision Sciences, The George Washington University


This talk is based on three related supply chain coordination problems that are motivated by work with a global semi-conductor manufacturer. First, we study a problem observed between the semi-conductor manufacturer and its distributors. We consider a vertically differentiated model, where the manufacturer makes a high- and a low-quality (performance) product and sells these to a single distributor, which in turn needs to price and sell them to a market with consumers who have heterogeneous valuations for quality. We determine the economic distortions that undermine both the sell-up (selling more of the higher-quality part) and sell-through (volume) objectives of the manufacturer. To align the economics of both parties, we analyze and compare several contracts. We then extend our model to multiple distributors to understand the effect of Cournot competition and derive an efficiency result under wholesale pricing as competition increases.

In the second half of the seminar, we consider the quality selection problem between the semi-conductor manufacturer and its resellers. In such a multi-supplier, one-manufacturer environment, each supplier sells a different component with a predefined quality range. The reseller has to decide what quality to choose for each component as it assembles them into a computer, trading off between the total cost and the total quality of the computer, both of which increase with the individual quality levels of the components. Based on a model that translates these individual quality levels into one final product quality, we first define the manufacturer’s strategic design problem. We then characterize the strategic interaction among the suppliers and show what kind of inefficiencies could occur in such systems. Finally, we present the impact of the gray market on supply chain coordination problems and explain why some of the contracts studied earlier may not be efficient in the presence of such gray markets.

Time: Friday, January 22nd 3:30-4:30 pm

Location: Funger Hall 620

Durable Products, Time Inconsistency, and Lock-in

Speaker: Sreelata Jonnalagedda, McCombs School of Business, The University of Texas at Austin


Many durable products cannot be used without a contingent consumable product, e.g. printers require ink, iPods require songs, razors require blades, etc. For such products, manufacturers may be able to lock-in consumers by making their products incompatible with consumables that are produced by other firms. We examine the effectiveness of such a strategy in the presence of strategic consumers who anticipate the future prices of both the durable product and the contingent consumable. On the one hand, by locking-in consumers to its own contingent consumable, a durable goods manufacturer can dampen its own incentive to reduce durables prices over time, thereby mitigating the classic time inconsistency problem. On the other hand, lock-in will also create a hold-up issue and will adversely affect consumers’ expectations of future prices for the contingent consumable. We demonstrate the trade-off between these two issues, time inconsistency and hold-up, and derive analytical results that provide insights about the conditions under which a lock-in strategy can be effective.

Time: Tuesday, January 19th 11:00-12:00 noon

Location: Funger Hall 520

Fall 2009

Combinatorial Optimization from Down Under

Speaker: Vicky Mak, School of Information Technology, Deakin University, Victoria, Australia


In this talk, I will give an overview of my experience in the modelling, solving, and theoretical analysis of a number of combinatorial optimization problems. These include: (1) polyhedral analysis and branch-and-bound-based methods for obtaining exact solutions to a number of VRP- and TSP-family problems, with areas of application including aircraft rotation and network design; (2) a new idea for exact solutions of integer programs: Iterative Aggregation and Disaggregation; (3) a constraint-programming-based algorithm for treatment-planning optimization in intensity-modulated radiotherapy; and (4) other optimization problems in the areas of frequency assignment, machine scheduling, and robotics routing.

Time: Friday, December 18, 11:00-12:00 noon

Location: Funger Hall 520 (2201 G Street, NW)

Time Allocation Strategies for Entrepreneurial Operations Management

Speaker: Onesun Steve Yoo, Anderson School of Management, University of California at Los Angeles


For many entrepreneurs, the main bottleneck resource of their company is their own time, rather than cash. In this paper, we develop a dynamic time-management framework for entrepreneurial process improvement for contexts where time is more constrained than cash, and provide clear guiding principles for time management. We classify an entrepreneur’s daily activities into four categories: fire-fighting, process improvement, revenue enhancement, and revenue generation, and analyze a stylized dynamic time allocation problem for maximizing long-term expected profits. We find that entrepreneurs should first invest time in process improvement until the process reliability reaches a certain threshold, then in revenue enhancement until the revenue rate reaches a certain threshold, and only then spend time generating revenue. Also, entrepreneurs with lower initial revenue rates should invest more time in process improvement and in revenue enhancement, ultimately earning revenue at a higher rate than if they were endowed with a higher initial revenue rate. Our model formally links time with money and introduces a framework for evaluating the opportunity cost of an entrepreneur’s time. We highlight the performance difference between the optimal policy and two commonly employed (well-intentioned) time management heuristics and show that working hard does not necessarily imply good time management.

Time: Wednesday, December 16th 10:45-12:00 noon

Location: Duques 553

The Impact of Take-Back Legislation on Remanufacturing

Speaker: Gokce Esenduran, Kenan-Flagler Business School, The University of North Carolina at Chapel Hill


We consider an original equipment manufacturer (OEM) in an industry regulated with take-back legislation that holds the OEM responsible for either taking back and properly treating products that reach the end of their life cycles or facilitating such take-back and proper treatment through third parties. We use a stylized model of take-back legislation and consider three levels of legislation (no take-back legislation, legislation on collection levels, legislation on collection and reuse levels) in two different supply-chain settings. We aim to understand whether legislation causes an increase in remanufacturing levels and if it can induce OEMs to manufacture products that are easier and cheaper to remanufacture. We first analyze the effects of legislation on an OEM with in-house remanufacturing capabilities. We find that if the manufacturing cost is very low, legislation on collection levels does not induce remanufacturing, while if the cost is high, legislation may be redundant. While take-back legislation never causes a decrease in remanufacturing levels, it may cause an increase in the price of remanufactured goods and a decrease in the level of new-product manufacturing. If the OEM does not remanufacture in-house and competes with a third-party remanufacturer instead, contrary to our earlier result, we find that legislation may cause a decrease in remanufacturing levels. But surprisingly, when we compare the effect of legislation on an OEM with in-house remanufacturing versus one competing with third parties, remanufacturing levels may be higher in the latter for the same level of legislation. Finally, we find that take-back legislation does induce OEMs to manufacture products that are easier and cheaper to remanufacture.

Time: Friday, December 10, 11:00-12:00 noon

Location: Funger Hall 320 (2201 G Street, NW)

Real-time Delay Estimation in Customer Service Systems

Speaker: Rouba Ibrahim
Department of Industrial Engineering and Operations Research, Columbia University


Motivated by the desire to make delay announcements to arriving customers, we study alternative ways of estimating customer delay in many-server service systems. Our delay estimators differ in the type and amount of information that they use about the system. We introduce estimators that effectively cope with real-life phenomena, such as customer abandonment (impatience), time-varying arrival rates, and general service-time distributions. We use computer simulation and heavy-traffic analysis to verify that our proposed estimators outperform several natural alternatives.
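One simple delay-history estimator studied in this line of work is the last-to-enter-service (LES) estimator: announce to an arriving customer the wait experienced by the last customer who entered service. The sketch below, a simplified illustration and not the speaker's actual experimental setup, simulates an M/M/s FIFO queue and records each delayed customer's actual wait alongside the LES announcement made on arrival; the function name and parameters are our own.

```python
import heapq
import random

def simulate_mms_les(lam, mu, s, n, seed=1):
    """Simulate an M/M/s FIFO queue for n arrivals. For each customer who
    must wait, pair the actual wait with the LES estimate announced on
    arrival (the wait of the last customer to have entered service)."""
    rng = random.Random(seed)
    t = 0.0
    free = s                  # idle servers
    queue = []                # waiting customers: (arrival time, LES estimate)
    busy = []                 # min-heap of service-completion times
    last_wait = 0.0           # wait of the most recent customer to enter service
    pairs = []                # (actual wait, LES estimate), delayed customers only
    for _ in range(n):
        t += rng.expovariate(lam)             # next arrival time
        while busy and busy[0] <= t:          # servers freeing before this arrival
            d = heapq.heappop(busy)
            free += 1
            if queue:                         # head-of-line customer enters service
                arr, est = queue.pop(0)
                w = d - arr
                pairs.append((w, est))
                last_wait = w
                heapq.heappush(busy, d + rng.expovariate(mu))
                free -= 1
        if free > 0:                          # served immediately, zero wait
            free -= 1
            last_wait = 0.0
            heapq.heappush(busy, t + rng.expovariate(mu))
        else:                                 # must wait; record LES announcement
            queue.append((t, last_wait))
    return pairs
```

Comparing the two columns of `pairs` (e.g., by mean absolute error) gives a quick sense of how informative delay history is about future delays in a given traffic regime.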

Time: Friday, December 9, 11:00-12:00 noon

Location: Funger Hall 320 (2201 G Street, NW)

Social Technology

Speaker: Marti A. Hearst
School of Information, University of California, Berkeley


We are in the midst of extraordinary change in how people interact with one another and with information. A combination of advances in technology and change in people’s expectations is altering the way products are sold, scientific problems are solved, software is written, elections are conducted, and government is run. People are social animals, and as Shirky notes, we now have tools that are flexible enough to match our in-built social capabilities. Things can get done that weren’t possible before because the right expertise, the missing information, or a large enough group of people can now be gathered together at low cost. These developments open a number of interesting research questions and potentially change how scientific research should be conducted. In this talk I will attempt to summarize and put some structure around some of these developments.

Time: Friday, November 20th, 11:00-12:00 noon

Location: Duques Hall 254 (2201 G Street, NW)

What Data Mining Teaches Me About Teaching Statistics

Speaker: Dick De Veaux
Department of Mathematics and Statistics, Williams College


Data mining has been defined as a process that uses a variety of data analysis and modeling techniques to discover patterns and relationships in data that may be used to make accurate predictions and decisions. Statistical inference concerns the same problems. Are the two really different? Through a series of case studies, we will try to illuminate some of the challenges and characteristics of data mining. Each case study reminds us that the important issues are often the ones that transcend the methodological choice one faces when solving real world problems. What lessons can these teach us about teaching the introductory course?

Time: Thursday, November 19 4:00-5:00 pm

Location: Funger Hall 620 (2201 G Street, NW)

Algorithms for Computing Nash Equilibria of Large Sequential Games

Speaker: Javier Pena
Tepper School of Business, Carnegie Mellon University


Finding a Nash equilibrium of an extensive form game is a central problem in computational game theory. For a two-person, zero-sum game this problem can be formulated as a linear program, which in principle is solvable via standard algorithms such as the simplex or interior-point methods. However, most interesting games lead to enormous linear programs that are beyond today’s computational capabilities. We propose specialized algorithms that tailor modern smoothing techniques to the highly structured polytopes that arise in the Nash equilibrium formulation. We discuss computational results with instances of poker, whose Nash equilibrium formulation has nearly a billion variables and a billion constraints.
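To illustrate equilibrium computation on a toy scale, and not the specialized smoothing methods of the talk, the sketch below approximates the minimax solution of a small zero-sum matrix game via regret matching (Hart & Mas-Colell), an iterative method whose time-averaged strategies converge to equilibrium in zero-sum games. All names and the example payoff matrix are our own.

```python
def _pos_norm(reg):
    """Strategy proportional to positive regrets; uniform if none positive."""
    pos = [max(r, 0.0) for r in reg]
    s = sum(pos)
    return [p / s for p in pos] if s > 0 else [1.0 / len(reg)] * len(reg)

def regret_matching(A, iters=50000):
    """Approximate minimax strategies for zero-sum payoff matrix A
    (payoffs to the row player) by self-play regret matching."""
    m, n = len(A), len(A[0])
    reg_row, reg_col = [0.0] * m, [0.0] * n
    avg_row, avg_col = [0.0] * m, [0.0] * n
    for _ in range(iters):
        x = _pos_norm(reg_row)
        y = _pos_norm(reg_col)
        for i in range(m): avg_row[i] += x[i]
        for j in range(n): avg_col[j] += y[j]
        u = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
        v = sum(x[i] * u[i] for i in range(m))          # current expected value
        for i in range(m): reg_row[i] += u[i] - v       # row regrets
        w = [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]
        for j in range(n): reg_col[j] += v - w[j]       # column minimizes
    return [s / iters for s in avg_row], [s / iters for s in avg_col]
```

The gap between the column player's best response to the averaged row strategy and vice versa bounds the distance from equilibrium; it shrinks roughly as the square root of the iteration count. The talk's smoothing techniques achieve far better scaling on the sequence-form polytopes of extensive-form games.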

Time: Friday, November 13th 3:30-4:30 pm

Location: Duques Hall 553 (2201 G Street, NW)

Attitudes Towards Firm and Competition: How do they Matter for CRM Activities?

Speaker: Nalini Ravishanker
Department of Statistics, University of Connecticut


The easy availability of information on a customer’s transactions with the firm, and the pressure to establish financial returns from marketing investments, have led to a dominance of models that directly connect marketing investments to sales at the customer level. Customers’ attitudes, on the other hand, have always been assumed to influence their reactions to a firm’s marketing communications, but are rarely included in models that determine customer value. We empirically assess (a) the role of customers’ attitudes in determining their value to the firm, and (b) how knowledge of customer attitudes can influence a firm’s customer management strategy. Specifically, we evaluate which aspects of attitudes, i.e., attitudes towards the firm or towards competition, have a bigger effect on customer behavior, and whether customer attitudes are more important for managing some customers than others. We use monthly sales-call, sales, and survey-based attitude information collected over three years from the same customers of a multinational pharmaceutical firm for this study. We develop a hierarchical generalized dynamic linear model (HGDLM) framework that combines the sales-call and sales data, which are available at regular time intervals, with customer attitudes, which are not available at regular intervals, and carry out inference in the Bayesian framework.
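The building block of such frameworks is the dynamic linear model, whose state-space form handles irregularly observed data naturally: when an observation is missing, the filter simply propagates the state forward without an update. The sketch below is a minimal univariate local-level filter, our own illustration of that mechanism, not the speaker's HGDLM.

```python
def local_level_filter(ys, V, W, m0=0.0, C0=1e6):
    """Kalman filter for the local-level DLM
        y_t = theta_t + v_t,   v_t ~ N(0, V)
        theta_t = theta_{t-1} + w_t,   w_t ~ N(0, W).
    A None entry (irregularly observed data) skips the update step, so
    the state mean carries forward and its variance grows by W."""
    m, C = m0, C0
    out = []
    for y in ys:
        a, R = m, C + W          # predict step
        if y is None:            # missing observation: no update
            m, C = a, R
        else:
            Q = R + V            # one-step forecast variance
            K = R / Q            # Kalman gain
            m = a + K * (y - a)  # posterior mean
            C = R - K * R        # posterior variance
        out.append((m, C))
    return out
```

In a hierarchical version, each customer gets its own state trajectory with parameters tied together across customers, and the filter recursions become one piece of a Bayesian sampling scheme.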

Time: Monday, October 19th 4:00-5:00 pm

Location: Funger Hall 520 (2201 G Street, NW)

Combining Simulations and Physical Observations to Estimate Cosmological Parameters

Speaker: David Higdon
Statistical Sciences, Los Alamos National Laboratory


The Lambda-Cold Dark Matter (LCDM) model of cosmology is perhaps the simplest model that describes the makeup and evolution of the universe in accordance with physical observations. This model contains up to 20 different cosmological parameters, which are constrained by data from space- and ground-based surveys. These cosmological measurements have reached a remarkable level of accuracy over the last decade. Future sky surveys promise to give even more numerous and more accurate data. However, such data do not inform directly about the cosmological parameters of interest. Detailed physical simulation models are typically required to relate information from these surveys to cosmological parameters. A Bayesian formulation adapted from Kennedy and O’Hagan (2001) and Higdon et al. (2008) is used to give parameter constraints from physical observations and a limited number of simulations. The framework is based on the idea of replacing the simulator with an emulator, which can then be used to facilitate the computations required for the analysis. In this talk I’ll describe an application that uses large-scale structure and Cosmic Microwave Background (CMB) data to inform about a subset of the parameters controlling the LCDM model.
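The emulator idea can be sketched in one dimension: fit a Gaussian process to a handful of expensive simulator runs, then use its cheap predictive mean in place of the simulator. The toy code below is our own minimal illustration (zero-mean GP, squared-exponential kernel, no hyperparameter estimation), far simpler than the emulators used in the actual cosmological analyses.

```python
import math

def _solve(Amat, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(Amat)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def gp_emulator(xs, ys, length=1.0, nugget=1e-8):
    """Fit a zero-mean GP with squared-exponential kernel to simulator
    runs (xs, ys); return the cheap predictive-mean function."""
    k = lambda a, b: math.exp(-0.5 * ((a - b) / length) ** 2)
    K = [[k(a, b) + (nugget if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = _solve(K, ys)                       # K^{-1} y
    return lambda x: sum(alpha[i] * k(x, xs[i]) for i in range(len(xs)))
```

The emulator interpolates the training runs almost exactly and smooths between them, which is what makes posterior exploration over the parameter space affordable when each simulator run is expensive.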

Time: Friday, October 9th 4:30-5:30 pm (Followed by wine & cheese reception)

Location: Duques Hall 652 (2201 G Street, NW)

Wired for Survival: The Rational (and Irrational) Choices We Make, from the Gas Pump to Terrorism

Speaker: Margaret M. Polski
George Mason University


Join the GW School of Business Institute for Integrating Statistics in Decision Sciences (I2SDS), the Department of Decision Sciences, and the Elliott School of International Affairs in hosting Dr. Polski for an informative book discussion. The event is free and open to the public.

Margaret M. Polski is a political economist with research interests in growth, innovation, regulation, and security. She has more than 25 years of experience developing and implementing transformation initiatives in business, government, and civic affairs. Dr. Polski has a Ph.D. from Indiana University, an M.P.A. from the Kennedy School of Government at Harvard University, and a B.E.S. from the University of Minnesota. She is a Research Affiliate at the Krasnow Institute for Advanced Study at George Mason University and a Research Fellow at the Institute for Development Strategies at the School for Public and Environmental Affairs at Indiana University.

Time: Wednesday, September 30th 3:00 – 4:00 pm

Location: Duques Hall 652