Statistical Models for Improving Prognosis of Heart Failure: Hazard Reconstruction, Clustering and Prediction of Disease Progression
Speaker: Francesca Ieva, University of Milano, Department of Mathematics
Heart Failure (HF) is nowadays among the leading causes of repeated hospitalisations in patients over 65. The longitudinal datasets built from hospital discharge records are consequently of great interest to clinicians and statisticians worldwide seeking insight into the burden of such an extensive disease. We analysed HF data collected from the administrative databank of an Italian regional district (Lombardia), focusing on the days elapsed between consecutive admissions for each patient in our dataset. The aim of this project is to identify groups of patients, conjecturing that the variables under study, the time segments between two consecutive hospitalisations, follow a different Weibull distribution within each hidden cluster. The overall distribution of each variable is therefore modeled as a Weibull mixture. Under this assumption we developed a survival analysis to estimate, through a proportional hazards model, the corresponding hazard function for the proposed model and to obtain the desired clusters jointly. We find that the selected dataset, a good representative of the complete population, can be categorized into three clusters, corresponding to “healthy”, “sick” and “terminally ill” patients. Furthermore, we attempt a reconstruction of the patient-specific hazard function by adding a frailty parameter to the considered model.
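The Weibull-mixture setup can be sketched numerically. The snippet below simulates inter-admission gap times from a three-component Weibull mixture and evaluates the mixture hazard f(t)/S(t); all parameter values (weights, shapes, scales) are hypothetical stand-ins for the three clusters, not estimates from the talk's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for three latent clusters ("healthy", "sick",
# "terminally ill"): mixing weights, Weibull shapes and scales for the
# days between consecutive hospitalisations. Illustrative values only.
weights = np.array([0.5, 0.35, 0.15])
shapes = np.array([1.2, 0.9, 0.7])
scales = np.array([400.0, 120.0, 30.0])

def sample_gap_times(n):
    """Draw inter-admission times from the Weibull mixture."""
    comp = rng.choice(3, size=n, p=weights)
    return scales[comp] * rng.weibull(shapes[comp])

def mixture_hazard(t):
    """Hazard of the mixture: density over survival, f(t) / S(t)."""
    f = np.zeros_like(t, dtype=float)
    S = np.zeros_like(t, dtype=float)
    for w, k, lam in zip(weights, shapes, scales):
        z = (t / lam) ** k
        f += w * (k / lam) * (t / lam) ** (k - 1) * np.exp(-z)
        S += w * np.exp(-z)
    return f / S

times = sample_gap_times(10_000)
print(round(times.mean(), 1))                      # mean gap (days)
print(mixture_hazard(np.array([30.0, 180.0])))     # hazard at 30 and 180 days
```

A shape parameter below 1 gives a decreasing hazard, so the "terminally ill" component dominates the mixture hazard at short gap times; a cluster-specific frailty term, as in the talk, would scale each patient's hazard multiplicatively.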
Friday, November 14th, 11:00 am – 12:00 noon
Software testing analytics: Examples from the evaluation of an independent software testing organization and the design of dynamic web testing
Speaker: Ron S. Kenett, KPA Ltd., Israel
Testing software is an activity required by customers and dictated by economic considerations. The talk will consist of two parts related to software testing. The first part presents an assessment of the effectiveness of an independent testing organization in terms of Escaping Defects: defects that were missed by the testing team and detected by users after the code was deployed. The talk will focus on the analysis of test data using the COM-Poisson regression model, which can handle both under-dispersion and over-dispersion relative to the Poisson model. Such data are common in testing, where Poisson or Negative Binomial models are often not flexible enough to fit the data. The second part considers testing of dynamic web services in the context of risk-based group testing. The approach is used for selecting and prioritizing test cases for testing service-based systems in the context of semantic web services. This work analyzes the two factors of risk estimation, failure probability and importance, from three aspects: ontology data, services and composite services. With this approach, test cases are associated with semantic features and are scheduled based on the risks of their target features. Risk assessment using Bayesian networks is used to control the process of Web Services progressive group testing, including test case ranking, test case selection and ruling out services.
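The dispersion flexibility of the COM-Poisson distribution can be sketched directly from its probability mass function, P(Y = y) proportional to lambda^y / (y!)^nu. The snippet below (a minimal illustration, with arbitrary lambda and a truncated normalizing series, not the talk's fitted regression) shows that nu = 1 recovers the Poisson, while nu < 1 and nu > 1 give over- and under-dispersion respectively.

```python
import math
import numpy as np

def cmp_pmf(lam, nu, ymax=200):
    """COM-Poisson pmf: weights lam**y / (y!)**nu, normalised over a
    truncated support (ymax is ample for these parameter values)."""
    y = np.arange(ymax + 1)
    logw = y * math.log(lam) - nu * np.array([math.lgamma(k + 1) for k in y])
    w = np.exp(logw - logw.max())   # subtract max for numerical stability
    return y, w / w.sum()

def mean_var(lam, nu):
    y, p = cmp_pmf(lam, nu)
    m = (y * p).sum()
    v = ((y - m) ** 2 * p).sum()
    return m, v

# nu = 1 is the Poisson (variance equals mean); nu > 1 gives
# under-dispersion, nu < 1 over-dispersion -- the flexibility the
# COM-Poisson regression exploits for escaping-defect counts.
for nu in (0.5, 1.0, 2.0):
    m, v = mean_var(4.0, nu)
    print(f"nu={nu}: mean={m:.2f}, var/mean={v/m:.2f}")
```

In the regression setting, lambda is linked to covariates while nu captures the dispersion that the Poisson and Negative Binomial models cannot accommodate on the under-dispersed side.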
Wednesday, November 5th, 11:00 am – 12:00 noon
Bayesian Analysis of Traffic Flow Data
Speaker: Vadim O. Sokolov, Argonne National Labs Transportation Systems Modeling Group
In this talk we consider the problem of estimating the state of traffic flow using filtering techniques that rely on an analytical model of traffic flow. The goal is to estimate current traffic conditions as accurately as possible from the sparse and noisy measurements of in-ground induction loop detectors. In practice this information is provided to travelers, who use it to make routing decisions, and to transportation system managers, who use it to forecast traffic conditions for the next 15-30 minutes and apply appropriate control strategies, such as route guidance or flow control through ramp metering. Existing filtering algorithms are limited in their ability to properly capture the nonlinear nature of the system dynamics, the mixture nature of the state uncertainty, and non-Gaussian sensor models. Here we develop and apply a computationally efficient particle-filter-based algorithm to address this problem. We apply our algorithm to a data set of measurements from the Illinois interstate highway system.
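The core mechanics of particle filtering (propagate, weight, resample) can be shown on a toy problem. The sketch below uses a generic one-dimensional nonlinear state equation and heavy-tailed Student-t sensor noise as stand-ins for the traffic-density dynamics and detector model; it is a bootstrap filter illustration, not the talk's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(obs, n_particles=1000):
    """Bootstrap particle filter for a toy 1-D nonlinear state-space model:
        x_t = 0.9 * x_{t-1} + 0.5 * sin(x_{t-1}) + process noise
        y_t = x_t + Student-t (non-Gaussian) sensor noise
    """
    x = rng.normal(0.0, 1.0, n_particles)   # initial particle cloud
    estimates = []
    for y in obs:
        # 1) propagate particles through the nonlinear dynamics
        x = 0.9 * x + 0.5 * np.sin(x) + rng.normal(0.0, 0.5, n_particles)
        # 2) weight by a Student-t likelihood (3 dof, up to a constant),
        #    which tolerates detector outliers better than a Gaussian
        r = y - x
        logw = -2.0 * np.log1p(r ** 2 / 3.0)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append((w * x).sum())     # posterior-mean estimate
        # 3) multinomial resampling to avoid weight degeneracy
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)

# simulate a short trajectory and noisy detector readings
true_x, xs, ys = 2.0, [], []
for _ in range(50):
    true_x = 0.9 * true_x + 0.5 * np.sin(true_x) + rng.normal(0, 0.5)
    xs.append(true_x)
    ys.append(true_x + rng.standard_t(3) * 0.5)

est = particle_filter(np.array(ys))
rmse = float(np.sqrt(np.mean((est - np.array(xs)) ** 2)))
print(round(rmse, 3))
```

Because the weighting step accepts any likelihood and the propagation step accepts any dynamics, nothing here requires linearity or Gaussianity, which is the property the talk exploits for traffic models.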
Friday, October 24th, 11:00 am – 12:00 noon
Hedging Demand and Supply Risks in Inventory Models
Speaker: Süleyman Özekici, Department of Industrial Engineering, Koç University, Istanbul, Turkey
We consider a single-period inventory model where there are risks associated with the uncertainty in demand as well as supply. Furthermore, the randomness in demand and supply is correlated with the financial markets; recent literature provides ample evidence on this issue. The inventory manager may then exploit this correlation and manage these risks by investing in a portfolio of financial instruments. The decision problem therefore includes not only the determination of the optimal ordering policy, but also the simultaneous selection of the optimal portfolio. We analyze this problem in detail and provide a risk-sensitive approach to inventory management where one considers both the mean and the variance of the resulting cash flow. The analysis results in some interesting and explicit characterizations of the structure of the optimal policy. The emphasis is on the impact of hedging and risk reduction.
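The variance-reduction mechanism can be illustrated with a single-period newsvendor whose demand is correlated with a tradable index. The snippet below (hypothetical prices, costs, and demand model, not the talk's formulation) applies a minimum-variance linear hedge, shorting beta units of the index where beta = Cov(cash flow, index) / Var(index), and compares the cash-flow standard deviation with and without the hedge.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-period newsvendor whose demand co-moves with a
# tradable market index -- illustrative numbers only.
price, cost, salvage = 10.0, 6.0, 2.0
n = 100_000

market = rng.normal(0.0, 1.0, n)                          # index return
demand = np.maximum(100 + 30 * market + rng.normal(0, 10, n), 0.0)

def cash_flow(q):
    """Revenue from sales plus salvage on leftovers, minus ordering cost."""
    sold = np.minimum(demand, q)
    return price * sold + salvage * (q - sold) - cost * q

q = 110.0
cf = cash_flow(q)

# minimum-variance hedge ratio: beta = Cov(cash flow, index) / Var(index)
beta = np.cov(cf, market)[0, 1] / market.var()
hedged = cf - beta * market

print(round(cf.std(), 2), round(hedged.std(), 2))
```

In a mean-variance objective, cutting the cash-flow variance this way changes the risk-adjusted value of each ordering decision, which is why the optimal order quantity and the optimal portfolio must be chosen jointly, as in the talk.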
Friday, October 17th, 11:00 am – 12:00 noon
Economic and Environmental Assessment of Remanufacturing Strategies for Product + Service Firms
Speaker: Gal Raz, Darden School of Business, University of Virginia
This article provides a data-driven assessment of economic and environmental aspects of remanufacturing for product + service firms. A critical component of such an assessment is demand cannibalization. We therefore present an analytical model and a behavioral study which together incorporate demand cannibalization from multiple customer segments across the firm’s product line. We then perform a series of numerical simulations with realistic problem parameters obtained from both the literature and discussions with industry executives. Our findings show that remanufacturing frequently aligns firms’ economic and environmental goals by increasing profits and decreasing the total environmental impact. We show that in some cases, introducing a remanufactured product leads to no change in the new product’s prices (its positioning within the product line), implying positive demand cannibalization and a decrease in the environmental impact; this provides support for a heuristic approach commonly used in practice. Yet in other cases, the firm can increase profits by decreasing the new product’s prices and increasing sales, an effectively negative cannibalization. With negative cannibalization the firm’s total environmental impact often increases due to the growth in new production. However, we illustrate that this growth is nearly always sustainable, as the relative environmental impacts per unit and per dollar rarely increase.
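The interplay of profit, total impact, and per-unit impact can be seen in a stylized linear-demand calculation. Every coefficient below is hypothetical (not the paper's calibrated parameters); the sketch simply compares a new-product-only line against one that adds a remanufactured option.

```python
def demands(p_new, p_rem=None):
    """Stylized linear demands with cross-price effects (made-up numbers)."""
    if p_rem is None:
        return 100 - 4 * p_new, 0.0
    d_new = 100 - 4 * p_new + 1.5 * p_rem
    d_rem = 60 - 5 * p_rem + 2 * p_new
    return max(d_new, 0.0), max(d_rem, 0.0)

c_new, c_rem = 8.0, 3.0    # unit production costs (hypothetical)
e_new, e_rem = 10.0, 4.0   # unit environmental impacts (hypothetical)

def outcome(p_new, p_rem=None):
    d_new, d_rem = demands(p_new, p_rem)
    profit = (p_new - c_new) * d_new
    if p_rem is not None:
        profit += (p_rem - c_rem) * d_rem
    impact = e_new * d_new + e_rem * d_rem
    units = d_new + d_rem
    return profit, impact, impact / units if units else 0.0

base = outcome(15.0)           # new product only
with_rem = outcome(15.0, 9.0)  # add a remanufactured option
print(base)
print(with_rem)
```

With these made-up coefficients, adding the remanufactured product raises profit and total impact while lowering the impact per unit, echoing the paper's point that growth driven by remanufacturing can raise absolute impact yet rarely raises the relative impact per unit or per dollar.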
Friday, October 10th, 11:00 am – 12:00 noon