Abstract
In the semiconductor industry, yield is a critical metric that significantly impacts fabrication efficiency. Accurate yield prediction is essential for estimating fabrication costs and mitigating risks associated with low yield. Recent advancements in data collection have led to the increased prevalence of data-driven yield prediction using equipment usage data. However, yield prediction based on equipment usage data alone lacks crucial temporal views, as wafer exposure time to air and humidity can negatively affect yield. This study addresses these limitations through a case study incorporating equipment usage data and Accumulated Cycle Time (ACT) data from a leading semiconductor manufacturer's production log data. This study presents a dual-view yield prediction approach, combining ACT and equipment usage data through a three-phase approach: (1) transforming production log data into equipment usage and ACT datasets, (2) implementing a dual-view neural network model with a custom layer architecture, and (3) conducting post-analysis to identify yield-influencing factors. By incorporating ACT data, the dual-view yield prediction approach addresses the limitations of using equipment usage data alone, providing essential temporal views into chemical property alterations affecting yield prediction. SHapley Additive exPlanations (SHAP) are also employed to enhance the interpretability of the dual-view yield prediction model. Results demonstrate improved yield prediction performance compared to conventional single-view approaches, offering an understanding of equipment condition and temporal views in fabrication. The dual-view yield prediction approach improves prediction performance by up to 78.61%. Moreover, the dual-view yield prediction approach in the case study provides actionable insights for yield improvement, particularly in critical areas such as the wet etch process step, where the temporal views of fabrication play a crucial role in determining actual yield. By bridging the gap between equipment condition and temporal views of semiconductor fabrication, this dual-view approach represents a significant advancement in yield prediction approaches, potentially leading to more efficient production process steps, reduced waste, and improved overall fabrication performance in the semiconductor industry.
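To make the dual-view architecture concrete, the following is a minimal sketch of a two-input network in Keras, assuming one feature vector per view; the feature counts, branch widths, and training data are illustrative stand-ins, not the paper's custom layer architecture.

```python
# Minimal dual-view sketch: one branch per data view (equipment usage and
# ACT), merged before a head that predicts wafer yield. All sizes and the
# synthetic data are assumptions for illustration only.
import numpy as np
from tensorflow.keras import layers, Model

n_equip, n_act = 64, 32                      # assumed feature counts per view

equip_in = layers.Input(shape=(n_equip,), name="equipment_usage")
act_in = layers.Input(shape=(n_act,), name="accumulated_cycle_time")

e = layers.Dense(32, activation="relu")(equip_in)   # equipment-view encoder
a = layers.Dense(16, activation="relu")(act_in)     # temporal (ACT) encoder

merged = layers.Concatenate()([e, a])
h = layers.Dense(16, activation="relu")(merged)
yield_out = layers.Dense(1, activation="sigmoid", name="yield")(h)

model = Model([equip_in, act_in], yield_out)
model.compile(optimizer="adam", loss="mse")

X_e, X_a = np.random.rand(256, n_equip), np.random.rand(256, n_act)
y = np.random.rand(256, 1)                   # placeholder yield values
model.fit([X_e, X_a], y, epochs=2, batch_size=32, verbose=0)
```

A model-agnostic SHAP explainer could then be applied to the fitted model to attribute predictions to individual features from either view, mirroring the post-analysis phase.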
Abstract
Latent defect chips in semiconductor fabrication present significant challenges to product quality and fabrication efficiency. These latent defect chips exhibit characteristics similar to good chips during wafer tests but fail in subsequent back-end processes or customer usage, rendering detection exceptionally difficult. Traditional latent defect chip detection approaches relied on adjusting test voltages based on engineering knowledge. However, the adjusted-voltage wafer test, while able to detect latent defect chips, risks overstressing good chips and degrading their quality and reliability. Recent approaches have shifted towards data-centric methods, utilizing electrical characteristic data from wafer tests. While these data-centric approaches have improved latent defect chip detection, they often overlook critical spatial factors influencing defect formation, such as wafer yield, proximity to wafer edges, and yield rates of adjacent chips. To address these limitations, a spatial-contextual approach expanding upon data-centric approaches for latent defect chip detection is proposed. This novel approach defines and learns latent defect chip probability with the spatial-context feature for each wafer coordinate, incorporating spatial factors into a deep learning model. The integration of spatial-contextual features effectively supports the learning of latent defect chip probability during the analysis of electrical characteristic data from wafer tests. The effectiveness of this spatial-contextual approach was evaluated using real-world wafer test data, demonstrating significant improvement in latent defect chip detection performance compared to conventional data-centric approaches. By integrating spatial-contextual features with advanced deep learning techniques, this approach bridges the gap between conventional data-centric approaches that use data in isolation and the complex spatial realities of semiconductor fabrication. The proposed approach underscores the importance of considering spatial-contextual features in developing latent defect detection models for complex fabrication, offering practical implications for enhanced quality control and fabrication efficiency.
Abstract
Latent defect chips in semiconductor manufacturing present significant challenges to product quality and fabrication efficiency. These latent defect chips exhibit characteristics similar to functional chips during wafer tests but fail in subsequent back-end processes, rendering detection exceptionally difficult. Traditional detection approaches relied on adjusting test voltages based on engineering knowledge. Recent approaches have shifted towards data-centric approaches, utilizing electrical characteristic data from wafer tests. While these data-centric approaches have improved latent defect chip detection, they often overlook critical spatial factors influencing defect formation, such as low wafer yield, proximity to wafer edges, and yield rates of adjacent chips. To address these limitations, a spatial-contextual approach expanding upon data-centric approaches for latent defect chip detection is proposed. This novel approach defines and learns latent defect chip probability for each wafer coordinate, incorporating spatial factors into a deep learning model. The integration of spatial-contextual information effectively supports the learning of feature importance during the analysis of electrical characteristic data from wafer tests. The effectiveness of this spatial-contextual approach was evaluated using real-world wafer test data, demonstrating significant improvement in latent defect chip detection performance compared to conventional data-centric approaches. By integrating spatial-contextual information with advanced deep learning techniques, this approach bridges the gap between conventional approaches that use data in isolation and the complex spatial realities of semiconductor manufacturing. The proposed approach underscores the importance of considering spatial-contextual information in developing latent defect detection models for complex fabrication, offering practical implications for enhanced quality control and fabrication efficiency.
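As an illustration of the kind of spatial factors named above, the sketch below computes wafer yield, a normalized edge-proximity value, and the yield of each chip's surrounding neighborhood from a binary pass/fail wafer map; the exact feature definitions of the study are not reproduced here, so these formulas are assumptions.

```python
# Hypothetical per-coordinate spatial-context features: overall wafer
# yield, distance from wafer center (edge proximity proxy), and the yield
# of the surrounding 3x3 neighborhood (including the chip itself).
import numpy as np

def spatial_context(pass_map):
    """pass_map: 2-D float array, 1 = good chip, 0 = fail, NaN = no chip."""
    rows, cols = pass_map.shape
    wafer_yield = np.nanmean(pass_map)
    r0, c0 = (rows - 1) / 2.0, (cols - 1) / 2.0
    feats = {}
    for r in range(rows):
        for c in range(cols):
            if np.isnan(pass_map[r, c]):
                continue
            edge = np.hypot(r - r0, c - c0) / np.hypot(r0, c0)
            nb = pass_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            feats[(r, c)] = (wafer_yield, edge, np.nanmean(nb))
    return feats

demo = np.random.binomial(1, 0.9, size=(20, 20)).astype(float)
print(list(spatial_context(demo).items())[:2])
```

Features of this kind would be concatenated with each chip's electrical characteristic vector before being fed to the deep learning model.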
Abstract
Accurate yield prediction is crucial in the semiconductor industry for optimizing processes and reducing fabrication costs associated with defective products. Recent data-driven approaches to yield prediction leverage production log data, including process cycle time (CT), rework levels, equipment usage, and detailed process specifics. CT, defined as the interval between the completion of one process step and the next, is important for yield prediction, as excessive CT can lead to wafer contamination. To effectively use CT in yield prediction, two key considerations must be addressed. First, CTs of adjacent processes should be accumulated and monitored, rather than using individual process CTs. Second, only those CTs that show a strong relation with wafer yield should be selected for prediction. Traditional methods rely on engineers' subjective empirical knowledge, which can lead to inaccuracies and inefficiencies. This study proposes a data-driven method for dynamic CT accumulation and selection to address these limitations. The proposed method uses random number generation to detect changes in yield prediction when CTs from adjacent process steps are accumulated. Additionally, it employs an information gain method for feature selection to eliminate irrelevant CTs. Data from a semiconductor company demonstrates that this data-oriented approach significantly enhances wafer yield prediction performance. The proposed method provides a more objective alternative to current subjective approaches, advancing yield prediction in the semiconductor industry through effective cycle time accumulation.
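The selection step could be sketched as below: candidate accumulated-CT features are scored by mutual information (an information gain measure) against a binarized yield label, and only the top-scoring CTs are retained. The data, column names, and top-k rule are illustrative assumptions.

```python
# Sketch of information-gain-based CT selection on synthetic data: one
# accumulated CT (acc_ct_3) is made predictive of low yield, and the
# mutual-information scores should rank it first.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
cts = pd.DataFrame(rng.exponential(1.0, size=(500, 10)),
                   columns=[f"acc_ct_{i}" for i in range(10)])
low_yield = (cts["acc_ct_3"] + rng.normal(0, 0.3, 500) > 1.5).astype(int)

gain = mutual_info_classif(cts, low_yield, random_state=0)
top_k = pd.Series(gain, index=cts.columns).nlargest(3)
print(top_k)   # CTs most informative about low yield are kept for prediction
```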
Abstract
Quality 4.0 represents a transformative paradigm in industrial quality management, integrating advanced technologies like the Industrial Internet of Things, artificial intelligence, and big data analytics. It enhances product quality and operational efficiency, extending beyond compliance and defect detection to focus on proactive measures, predictive analytics, and continuous improvement. Despite ongoing debates about the appropriateness of the term Quality 4.0, embracing it implies a shift from reactive problem-solving to a proactive and preventive approach. Hence, research on Quality 4.0 is essential for advancing knowledge, preparing future professionals, and critically evaluating the implications of integrating advanced technologies into quality management practices. The talk comprises two parts. First, recent trends in Quality 4.0 will be presented, covering the features and benefits of Quality 4.0. Application cases in Korean industries like steelmaking and consumer electronics will also be discussed. Second, potential research directions in Quality 4.0 for the operations research community will be explored, such as developing dynamic quality management models, simulating quality processes in autonomous manufacturing environments, and measuring and optimizing supply chain quality. These directions leverage operations research expertise to address complexities and challenges in implementing Quality 4.0 in manufacturing and supply chain contexts.
Abstract
Wafer map defect patterns represent the spatial patterns of defective chips on a wafer and provide crucial information for root-cause identification in the wafer fabrication process. Wafer map defect pattern classification models have been employed for efficient root-cause identification. However, the learning process of classification models requires a huge labeled wafer map dataset, and manual labeling is expensive. To address this issue, active learning is employed to select informative wafers to be queried for manual labeling, and the newly labeled wafers are fed into the learning process to accelerate it with a small dataset. The criterion for selection is called the query strategy. Existing query strategies for wafer map defect patterns mainly use the uncertainty in predicting the class (defect pattern). Since the wafer map dataset has a data imbalance problem, wafers selected by uncertainty are mainly distributed over the majority classes. Thus, classification performance on minority classes is hardly improved during active learning. This study proposes a contrastive learning-based query strategy to preferentially select wafers of the minority class when data imbalance resides in the wafer map dataset. Contrastive learning learns a function that embeds wafers into a representation space in which wafers with the same class stay close to each other while those with different classes are far apart. Then, unlabeled wafers are embedded into the representation space, and the unlabeled wafers that are close to the labeled wafers of the minority class are selected for query. The proposed query strategy is compared to an existing strategy using a public wafer map dataset. The results showed that the proposed strategy improved performance by capturing the minority class faster than the existing one. The proposed strategy would contribute to the efficient learning of wafer map defect pattern classification models.
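The selection rule might look like the sketch below, assuming a contrastive encoder has already mapped wafers into the representation space; the Euclidean metric and query budget are illustrative choices, not necessarily those of the proposed strategy.

```python
# Query the unlabeled wafers whose embeddings lie closest to labeled
# minority-class wafers, so the minority class is captured earlier.
import numpy as np
from sklearn.metrics import pairwise_distances

def query_minority(z_unlabeled, z_minority, budget=5):
    """z_*: (n, d) embeddings from a contrastive encoder (assumed given)."""
    # Distance from each unlabeled wafer to its nearest minority exemplar.
    d = pairwise_distances(z_unlabeled, z_minority).min(axis=1)
    return np.argsort(d)[:budget]        # indices to send for manual labeling

rng = np.random.default_rng(1)
z_u = rng.normal(size=(100, 16))                 # unlabeled embeddings
z_m = rng.normal(loc=2.0, size=(8, 16))          # labeled minority cluster
print(query_minority(z_u, z_m))
```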
Abstract
The Serial-Parallel Multistage Manufacturing Process (SP-MMP) involves multiple process stages, each offering multiple alternative machines. The final product is produced through consecutive process stages, with one alternative machine assigned to the product in each stage. End-of-line (EOL) testing is conducted after the final process stage to filter out defective products. Predicting the final product quality before EOL testing can reduce the quality cost associated with EOL testing. To achieve this, production log data have been analyzed for quality prediction. Production log data record the sequence of process stages and machines assigned to produce the product. Previous studies have utilized complex models, such as ensemble models or deep learning models, to improve predictive performance. However, the high complexity of these models hinders the understanding of model behavior, making it challenging to investigate the root cause of defective products. This study proposes a method for interpreting a predictive model to obtain insights for root-cause identification in the SP-MMP. First, we introduce a multi-modal transformer network that considers two modalities, the sequences of process stages and machines, to predict the final quality. Subsequently, an interpreting model is proposed for the multi-modal transformer network, aiming to identify the root cause of defective product production. The interpreting model takes into account three factors that mainly influence the final product quality: individual elements, the sequence of elements, and the relation between the two modalities. Case studies on a simulation experiment and a real-world dataset are conducted to evaluate the performance of the proposed method in terms of predictive performance and interpretability. The results demonstrate that the proposed multi-modal approach achieves higher prediction performance and improved interpretability compared to existing methods. This research contributes to advancing the understanding of the SP-MMP and facilitates the identification of the root cause of defective products, thereby enabling quality improvement in the SP-MMP.
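A minimal sketch of such a two-modality network in PyTorch is shown below, assuming integer-encoded stage and machine sequences of equal length; the vocabulary sizes, depth, and mean pooling are illustrative, and the interpreting model itself is not reproduced.

```python
# Two modalities (stage IDs, machine IDs) are embedded separately, summed
# position-wise, and passed through a transformer encoder to predict a
# defect-probability logit for the final product.
import torch
import torch.nn as nn

class DualModalTransformer(nn.Module):
    def __init__(self, n_stages=10, n_machines=40, d=32):
        super().__init__()
        self.stage_emb = nn.Embedding(n_stages, d)
        self.mach_emb = nn.Embedding(n_machines, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, 1)

    def forward(self, stages, machines):          # both: (batch, seq_len)
        x = self.stage_emb(stages) + self.mach_emb(machines)
        h = self.encoder(x).mean(dim=1)            # pool over the sequence
        return self.head(h).squeeze(-1)

model = DualModalTransformer()
stages = torch.randint(0, 10, (4, 10))
machines = torch.randint(0, 40, (4, 10))
print(torch.sigmoid(model(stages, machines)))      # defect probabilities
```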
Abstract
The complexity and diversity of defect mechanisms challenge latent defect detection after the wafer test. Recent studies have focused on machine learning models for latent defect classification using wafer test data, but the need for high-speed, accurate classification may limit their industrial application. This study proposes a hybrid sampling method considering spatial attributes on the wafer. Under-sampling employs a one-dimensional convolutional neural network autoencoder for spatial learning, while over-sampling uses the Synthetic Minority Over-sampling Technique on inadequately learned samples. Case studies show that this hybrid approach, paired with traditional machine learning, enhances latent defect detection and reduces training time, thus potentially aiding cost-effective wafer testing in practice.
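A rough sketch of the hybrid flow under simplifying assumptions: a 1-D convolutional autoencoder scores majority-class samples by reconstruction error, under-sampling keeps the least well-learned (highest-error) samples, and SMOTE then rebalances the classes. The keep rule, sizes, and the application of SMOTE to the whole minority class are simplifications of the proposed method.

```python
# Hybrid sampling sketch: Conv1D autoencoder for under-sampling, SMOTE for
# over-sampling. Data are synthetic placeholders for wafer test features.
import numpy as np
from tensorflow.keras import layers, Model
from imblearn.over_sampling import SMOTE

n_feat = 32
X_maj = np.random.rand(500, n_feat, 1)     # majority (normal) samples
X_min = np.random.rand(30, n_feat)         # minority (latent defect) samples

inp = layers.Input(shape=(n_feat, 1))
h = layers.Conv1D(8, 3, padding="same", activation="relu")(inp)
h = layers.MaxPooling1D(2)(h)              # bottleneck
h = layers.Conv1D(8, 3, padding="same", activation="relu")(h)
h = layers.UpSampling1D(2)(h)
out = layers.Conv1D(1, 3, padding="same")(h)
ae = Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X_maj, X_maj, epochs=2, verbose=0)

err = np.mean((ae.predict(X_maj, verbose=0) - X_maj) ** 2, axis=(1, 2))
keep = np.argsort(err)[-100:]              # keep inadequately learned samples
X_bal = np.vstack([X_maj[keep, :, 0], X_min])
y_bal = np.array([0] * 100 + [1] * 30)
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X_bal, y_bal)
print(X_res.shape, np.bincount(y_res))     # classes balanced for training
```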
Abstract
Maritime vessels are vital to international trade and commerce and require safe and efficient navigation. In particular, the main engine is a critical element that requires regular monitoring. Unlike aircraft and automobiles, ships operate for extended periods of time, so a main engine failure can have significant economic and safety implications. In this case study, we use the latest advances in the Internet of Things (IoT) and Explainable Artificial Intelligence (XAI) to address these challenges. Through real-time data collection from a 6800 TEU container ship in Korea equipped with a 2-stroke diesel engine, the goal of this case study is to find independent feature importance by using the Multi-Collinearity Corrected (MCC) Shapley value, which removes multi-collinearity. Multi-collinearity must be removed because SHAP cannot accurately calculate feature importance when variables are correlated. The case study unfolds in three steps. First, an isolation forest was used to categorize and label the data as normal or anomalous. Second, supervised learning models were developed, and their results were interpreted by XAI to understand the importance of different features. Third, the study compares the results of the general SHAP method with those of the SHAP method after removing multi-collinearity. By utilizing the MCC Shapley value, our case study facilitates a more nuanced understanding of the variables contributing to main engine anomalies, yielding interpretations that are more reliable than conventional methods. This case study is expected to provide a significant advancement in main engine monitoring and maintenance, and to significantly improve maritime safety and operational efficiency.
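The first two steps might be sketched as follows with standard libraries; the sensor names are hypothetical, and since the MCC Shapley correction is the study's own contribution, plain tree SHAP stands in for it here.

```python
# Step 1: isolation forest labels engine data as normal/anomalous.
# Step 2: a supervised model is trained on those labels and interpreted
# with (uncorrected) SHAP to rank feature importance.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, 5)),
                 columns=["rpm", "exhaust_temp", "scav_press",
                          "fuel_flow", "load"])     # hypothetical sensors

labels = (IsolationForest(random_state=0).fit_predict(X) == -1).astype(int)
clf = GradientBoostingClassifier(random_state=0).fit(X, labels)

sv = shap.TreeExplainer(clf).shap_values(X)          # (n_samples, n_features)
importance = np.abs(sv).mean(axis=0)                 # mean |SHAP| per feature
print(pd.Series(importance, index=X.columns).sort_values(ascending=False))
```

Step three would repeat the attribution after the multi-collinearity among correlated sensors has been removed and compare the two rankings.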
Abstract
In semiconductor manufacturing, the detection of latent defects is recognized as critical to both productivity and product reputation. However, the complex and diverse nature of defect mechanisms makes detection challenging, especially since these latent defects often manifest after the wafer test stage. Existing research has used complex machine learning models with many parameters and hidden layers to classify latent defects from wafer test data; however, due to the need for fast and accurate classification and the challenges of imbalanced datasets, these models have proven impractical for industrial applications. This study presents an under-sampling approach that incorporates the edge characteristics of the wafer. Unlike traditional under-sampling techniques that focus solely on data attributes, it takes into account the empirical observation that chips near the edge of the wafer typically exhibit lower reliability and yield. By using a one-dimensional convolutional neural network autoencoder to capture edge characteristics, the normal-to-latent-defect ratio was adjusted by under-sampling. Case studies using actual semiconductor wafer test data have shown that integrating this under-sampling approach with conventional machine learning algorithms not only improves latent defect detection capabilities but also reduces model training times. As a result, this approach has been identified as a cost-effective alternative among machine learning models for wafer tests to detect latent defects, eliminating the need for additional testing that may degrade the quality of functional chips. Furthermore, the incorporation of edge characteristics into the under-sampling has opened the possibility of extending this approach beyond semiconductor manufacturing, making it applicable across industries by targeting product-specific weaknesses.
Abstract
Serial-parallel multistage manufacturing process (SP-MMP) comprises multiple consecutive process stages, each of which has several alternative machines. The performance of alternative machines in a process stage is not identical. Hence, the quality of the final product varies depending on the sequence of machines used in production. This study presents an application of deep learning with an attention mechanism to predict the quality of the final product using production log data. The proposed method extracts the sequence of the machines used in production from the production log data and uses the sequence information in predicting the quality of the final product. The performance of the proposed method is validated through a simulation experiment and a real-world case study on a semiconductor manufacturing process.
Abstract
Serial-parallel multistage manufacturing process (SP-MMP) has multiple consecutive process stages, and each stage has several alternative machines. Faulty machines negatively influence product quality; thus, the identification of faulty machines is a critical task for quality enhancement. Suspicious machines, which are suspected to be faulty, are screened before machine diagnosis to enhance the efficiency of faulty machine identification. This study proposes a method to screen suspicious machines using production log data in an SP-MMP with nominal quality features. The proposed method estimates whether a machine incurs low-quality products by itself or through its interaction with other machines. The performance of the proposed method is validated through case studies on a simulated SP-MMP and a real semiconductor manufacturing process.
Abstract
A wafer test performed in back-end semiconductor production evaluates the quality, reliability, and computational speed of chips. Advanced approaches to detect abnormal wafer test equipment are fundamental for equipment maintenance purposes in this context. However, existing studies showed low detectability of abnormal equipment since their approaches mainly focused on indirect information such as wafer yields or probe cards in wafer test equipment. This study proposes an approach for detecting abnormalities by analyzing event log data that record the detailed operation of the wafer test equipment. The proposed approach selects critical events using feature selection and detects unusual event patterns from the event log data of each wafer test equipment. The proposed approach is validated by applying it to newly collected event log data from actual wafer test equipment.
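A toy sketch of this pipeline is given below: per-equipment event counts serve as features, a simple variance filter stands in for the feature-selection step, and a one-class detector flags unusual event patterns. The log schema, events, and parameters are invented for illustration.

```python
# Build event-count features per equipment from an event log, keep the
# most variable events, and flag equipment with unusual count patterns.
import pandas as pd
from sklearn.svm import OneClassSVM

log = pd.DataFrame({                      # hypothetical event log schema
    "equipment": ["EQ1", "EQ1", "EQ2", "EQ2", "EQ3", "EQ3", "EQ3"],
    "event":     ["probe_up", "retest", "probe_up", "align",
                  "retest", "retest", "align"],
})
counts = pd.crosstab(log["equipment"], log["event"])
selected = counts.loc[:, counts.var().nlargest(2).index]   # feature selection
flags = OneClassSVM(nu=0.3).fit_predict(selected)          # -1 = unusual
print(pd.Series(flags, index=selected.index))
```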
Abstract
Smart lighting automatically adjusts brightness and color according to weather, the user's activity, mood, etc. This study performs a user experience (UX) evaluation of HDC lighting, a smart lighting product. Several studies have shown that lighting of specific brightness and color affects indoor stability and work efficiency. However, research to improve UX, which affects user-friendliness and market competitiveness, is insufficient. UX depends not only upon the function and performance of the product but also on the situational and environmental context in which the product is used. Therefore, it is necessary to evaluate the UX that occurs in the actual use of smart lighting, and a living lab is drawing attention as a way to do this. A living lab is a virtual or physical space where various stakeholders participate to develop, verify, and evaluate products in a real-life context. In this study, a real-life context of using HDC lighting was established in the Smart Safety Living Lab, and subjects were allowed to interact with HDC lighting freely. The UX evaluation confirmed that the UX was excellent overall and that the subjects valued the various lighting environments of HDC lighting. In addition, improvements were derived, such as the need for additional functions to improve the quality of life and the difficulty of operation due to the variety of functions compared to general lighting. This study also showed the possibility of deriving abundant improvement ideas through UX evaluation in a living lab. This study is expected to contribute to developing HDC lighting into high-quality smart lighting by reflecting the needs and UX derived in a real-life context.
Abstract
In a serial-parallel multistage manufacturing process (SP-MMP), each stage has several alternative machines, among which one machine is assigned to an individual product. In SP-MMPs where products are produced in batch units, the products in the same batch tend to move collectively through the same process path (i.e., the same sets of machines in several stages). This property makes it difficult to diagnose faulty machines that result in a high defective rate. This study develops a method to derive diagnostic priorities for sets of machines considering the collective movement of products. The proposed method is applied to a semiconductor manufacturing process to demonstrate its effectiveness.
Abstract
A serial-parallel multistage manufacturing process consists of multiple process stages, each of which has several alternative machines. The performance of machines in a process stage is not identical, and a faulty machine tends to produce more defective products. In order to reduce the effort of diagnosing faulty machines, it is desirable to first find machines that are suspected of being faulty, called suspicious machines. This study proposes a method to select suspicious machines using production log data, which record a sequence of operating machines for each product. The proposed method is illustrated using a case study on a ring-shaped pattern of defectives in a semiconductor manufacturing process.
Abstract
Advanced metering infrastructure (AMI) is a system to measure electricity usage in real time. Despite the benefits of AMI, its acceptance is being delayed by some obstacles. First, AMI could cause privacy invasion because it collects electricity usage information that may disclose the life pattern of a household. Second, consumers who regularly use a small amount of electricity may not need AMI. This study examines the effects of information privacy concerns and electricity usage habits on the acceptance of AMI using the structural equation modeling technique. The results would be useful for electric power companies to establish effective strategies for AMI penetration in households.
Abstract
Advanced metering infrastructure (AMI) is an infrastructure to measure electricity usage automatically and remotely. The collected electricity usage data can be utilized in consumer load profiling, real-time monitoring, and electricity reduction. Although a wide dissemination of AMIs has enormous benefits, the global penetration rate was still only 14% on average in 2019. Many studies have explored the factors of AMI acceptance, but most of them only focus on factors from a technology or AMI supplier perspective. Hence, it is difficult to understand why residents hesitate to accept AMIs. To resolve this limitation, this study explores two major factors from a resident perspective, namely, information privacy concerns (IPCs) and perceived electricity usage habits (PEUHs). IPCs are the concerns that AMIs collect extensive amounts of data and thus cause privacy invasion. Electricity usage data collected by AMIs can disclose detailed information about the activities of a particular household. This could make residents feel that their privacy is invaded. PEUHs are the perceptions of residents that they have bad electricity usage habits, that is, that they use a lot of electricity with irregular patterns. These PEUHs may have an impact on the intention to accept AMI. Rationally, residents having severe PEUHs would have a high need for AMI. This study identifies the detailed dimensions and items of IPCs and PEUHs based on a literature review. The results can provide information on the critical dimensions affecting AMI acceptance and can further be used as a basis for developing scales to estimate the degree of AMI acceptance.
Abstract
A serial-parallel multistage manufacturing process (SP-MMP) consists of multiple consecutive process stages, each of which has several alternative machines. Although the functions of the machines in a stage are identical, their actual conditions are not. A faulty machine has a direct and negative impact on the quality of products. Diagnosis of the machines in an SP-MMP is required to detect faulty machines, but diagnosis is a cost- and labor-intensive task. Thus, suspicious machines, which are suspected to be faulty, are first selected, and then they are diagnosed in a production line. In order to select suspicious machines, various studies have employed production log data, which record the sequence of operating machines throughout the entire process stages for each product. This study provides a literature review of suspicious machine selection methods using production log data with a focus on the semiconductor industry. The reviewed articles are classified into three groups along two dimensions, namely, the type of quality feature and the relationship analysis results. Based on the review, the status of current research, limitations, and future research directions are suggested.
Abstract
Advanced metering infrastructure (AMI) is an integrated system of smart meters, communications networks, and data management systems. The major function of AMI is to measure electricity usage automatically and remotely. The collected electricity usage data can be utilized in consumer load profiling and then in inducing consumers to reduce load during peak time. Although deploying AMI has enormous benefits, several obstacles hinder its adoption. First, information privacy concerns (IPC) are one of the serious obstacles. Electricity usage data can disclose detailed information about the behavior and activities of a particular household. This disclosure could make consumers feel surveilled and that their privacy is invaded. Second, the benefits of adopting AMI are not yet clear from the consumer perspective. Electric power companies have emphasized that consumers who use a large amount of electricity with irregular usage patterns are potential customers who can enjoy the benefits of AMI. However, this argument has not yet been investigated from the consumer perspective. This study examines the effect of IPC and perceived usefulness on the acceptance of AMI. The technology acceptance model with the structural equation modeling technique is employed. The results of this study are expected to provide insights into how and why IPC and perceived usefulness influence the adoption of AMI, and to help electric power companies establish effective strategies for AMI penetration in households.
Abstract
Advanced metering infrastructure (AMI) is an integrated system of smart meters, communication networks, and data management systems. The AMI allows the automatic and remote measurement and monitoring of energy consumption. It also provides important information for the management of peak demand, energy consumption, and costs. Pohang University of Science and Technology (POSTECH) has developed its own AMI and an IT platform called the Open Innovation Big Data Center (OIBC) to store and share various data collected on the campus. In this work, we describe the AMI and the OIBC platform, equipped with various sensors and systems for measuring, storing, calling, and monitoring data. Data are collected from seven buildings with different characteristics. We installed 266 sensors at the buildings, including 188 EnerTalk and Biz, 18 plugin, and 60 high-sampling sensors. The sensors collect electricity consumption data in real time, and users can visualize and download the data through the OIBC platform. In this work, we present analysis results of the collected data. The results show that the amounts of electricity consumed by campus buildings differ depending on various factors, including building size, occupant type and behaviors, and building use. We also compare the amounts of electricity consumed before and after the COVID-19 outbreak. The extracted information can be used to improve the satisfaction of students and faculty as well as the efficiency of electricity management.
Abstract
Smart Safety Living Lab (SSLL) is a living lab facility, constructed and operated by KITECH in Korea, to support the evaluation, improvement, and certification of smart safety products and services. User experience (UX) is crucial to the success of products and services in the market, and it is an important aspect of evaluation in the SSLL. This study develops a framework for UX evaluation in the SSLL. The framework consists of a structured process for UX evaluation and also provides a guideline for conducting each step of the process. The usefulness of the proposed framework is shown via case studies.
Abstract
Wafer bin map (WBM) represents the probe test result of a wafer and includes the locational information of defective dies on the wafer. A WBM pattern is a pattern of defective dies and provides crucial information for detecting the root cause of failures in the semiconductor manufacturing process. WBM pattern classification methods have been widely investigated for this reason; however, the taxonomies used in previous studies differ, and research on a systematic taxonomy is limited. This research aims to develop a WBM pattern taxonomy consisting of geometric dimensions, namely, the size, shape, and location of the defect patterns. WBM data from a real semiconductor manufacturing process are used for validation, and the experimental results show the suitability of the proposed taxonomy.
Abstract
When households move into empty units in a collective residential building, appliance noise discomfort (ND) between neighbors and housing preference (HP) are important considerations. This study proposes a model to assign households with the goal of minimizing ND and unsatisfied HP, and demonstrates the application of the model through a case study at a campus apartment building. ND levels are calculated by an approach that utilizes electricity usage data and identifies time differences in appliance usage between neighbors. Items of HP are extracted from interviews with residents living in the building. This study discusses assignment results under various scenarios through sensitivity analysis. The proposed model can help assign households considering ND and HP in a collective building in practice.
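The core of the model can be sketched as a linear assignment problem over a weighted sum of the two objectives; the cost matrices and weight below are invented for illustration and do not reflect the study's actual data or formulation.

```python
# Assign households to units by minimizing a combined cost of predicted
# noise discomfort (ND) and unmet housing preference (HP).
import numpy as np
from scipy.optimize import linear_sum_assignment

nd_cost = np.array([[2.0, 5.0, 1.0],     # ND between each household and
                    [4.0, 1.0, 3.0],     # the neighbors of each unit
                    [3.0, 2.0, 4.0]])
hp_cost = np.array([[0.0, 1.0, 2.0],     # unmet-preference penalty
                    [2.0, 0.0, 1.0],
                    [1.0, 2.0, 0.0]])
w = 0.7                                   # weight between the two goals
rows, cols = linear_sum_assignment(w * nd_cost + (1 - w) * hp_cost)
print(dict(zip([f"household_{r}" for r in rows],
               [f"unit_{c}" for c in cols])))
```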
Advanced metering infrastructure (AMI) refers to an integrated system of smart meters, communication networks, and data management systems. Pohang University of Science and Technology (POSTECH) has developed its own AMI for electricity usage data from campus buildings. POSTECH has also developed an IT platform, called the Open Innovation Bigdata Center (OIBC), to store and share various data generated on the campus. In this work, we describe the AMI and the OIBC platform, which include various sensors and systems for measuring, storing, calling, and monitoring data. The data are collected from seven buildings that have different characteristics. We also present some applications of the collected data. The applications show that the amounts of electricity usage of the seven campus buildings differ depending on various factors, including building size, use, and occupant type and behaviors. The information extracted from the applications can be used to improve the satisfaction of students and faculty as well as the efficiency of electricity management.
In semiconductor manufacturing, a wafer bin map (WBM) provides information on bin values for dies based on electrical test results (e.g., value 1 for defective dies and value 0 for normal dies). Empirically, the test results of adjacent dies tend to have similar values, forming spatial defect patterns. These patterns on a WBM can be classified as specific types of patterns (e.g., ring, line, or zone) and contain useful information that helps to identify root causes in the fabrication process. In this study, machines are regarded as the only source of the root causes. Therefore, each type of spatial pattern on a WBM occurs due to assignable causes, which come from machines at specific process stages. Moreover, a spatial defect pattern on a WBM frequently occurs with the same suspicious machines. Therefore, it is important to identify the root causes of the spatial patterns on WBMs for yield improvement. Despite its importance, engineers have relied on their intuition and knowledge to identify the root causes of the spatial patterns on WBMs. In this paper, we present a heuristic method for identifying the root causes of the patterns on WBMs. During the semiconductor manufacturing process, each wafer is processed by different machines at each process stage, and this information is recorded. Thus, there will be various patterns of assigned machines at different process stages in the process records of wafers. These various patterns are called sequence patterns in this study. The proposed method derives suspicious sequence patterns, which cause each systematic pattern on WBMs, based on the frequency and influence of the sequence patterns. The output of the proposed method is expected to support root cause analysis by engineers.
Pohang University of Science and Technology (POSTECH) built the Open Innovation Bigdata Center (OIBC) in 2017. Energy usage data have been collected and stored at the OIBC in real time from seven on-campus buildings in four areas: living, research, production, and education. An apartment building is one of the buildings from which energy usage data are collected. This study applied the energy usage data of each household in the apartment building to infer occupants' discomfort. Occupants who have different lifestyle patterns but live near each other may feel discomfort. Energy usage patterns are highly associated with lifestyle patterns: occupants perform activities using electronic devices at specific times, and these routines are quite regular. Based on this background, we developed a scoring method to calculate the discomfort between households based on their energy usage patterns. We first clustered households that use energy similarly and then assigned lifestyle features to each cluster. Through this process, each household has its own lifestyle, and the discomfort between households is calculated based on the developed scoring method. The result can be useful for operating the apartment building, with the following advantages. First, it is useful to understand or predict which occupants feel more or less discomfort without additional efforts such as surveys and interviews. Second, it is also helpful for new occupants who move into the apartment by recommending the location that can minimize their discomfort based on simple lifestyle information. Finally, it can be used for additional campus services such as an incentive service and an energy management service.
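The clustering step might look like the sketch below on synthetic hourly load profiles; the number of clusters and profile shapes are illustrative, and the discomfort scoring itself is not reproduced.

```python
# Cluster households by 24-hour load profile so lifestyle features can be
# attached per cluster; two synthetic lifestyles (morning vs. evening peak).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(24)
morning = 1 + np.exp(-(hours - 7) ** 2 / 8.0)    # morning-active lifestyle
evening = 1 + np.exp(-(hours - 21) ** 2 / 8.0)   # evening-active lifestyle
profiles = np.vstack(
    [morning + rng.normal(0, 0.1, 24) for _ in range(10)] +
    [evening + rng.normal(0, 0.1, 24) for _ in range(10)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(labels)   # households grouped by energy usage (lifestyle) pattern
```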
Wafer bin map (WBM) represents the probe test result on the dies of a wafer using binary values, pass or fail. Defective dies often form a spatial pattern, which provides crucial information to identify the assignable causes of process variations. Therefore, it is important to identify and classify spatial defect patterns on WBMs for yield enhancement in the semiconductor fabrication process. Mixed-type defect patterns occur when multiple defect patterns appear on a single wafer simultaneously. As the semiconductor fabrication process becomes more complicated, the occurrence of mixed-type defect patterns is more frequent. This research intends to develop a mixed-type defect pattern classification framework via a neural-network approach. The proposed framework consists of two phases: extraction of spatial patterns and classification of mixed-type defect patterns. For this purpose, convolutional neural networks and an object detection algorithm are adopted. Data from a real semiconductor fabrication process were used for validation of the proposed framework. The experimental results show the effectiveness of the proposed framework in the classification of mixed-type defect patterns as well as single defect patterns.
Providing a driver with road information at the appropriate time is important to reduce accidents. A flexible organic light emitting device chevron alignment sign (FOLED-CAS) is a smart safety device that consists of a solar battery and FOLEDs. It is charged in the daytime and turns on the FOLEDs at night to give drivers advance warning of a curve. The sign directly interacts with a driver and could reduce driving fatigue and cognitive load. Therefore, it is important to evaluate whether the sign has a positive effect on safe driving and delivers a satisfying experience for drivers. This study conducted an experiment to evaluate the user experience (UX) of the FOLED-CAS in various driving contexts. The experiment utilized a virtual reality (VR) model to overcome the limits of space, time, risk, and cost. The model has 12 scenarios depending on weather, type of car, and size and type of CAS. Each of the 24 subjects virtually drove in all scenarios of the VR model, implemented with a head-mounted display and a driving simulator. While driving in each scenario, the driving simulator collected driving behavior data such as the number of rapid decelerations and sharp turns. UX survey data such as usability and affect were collected after each scenario. Analysis of the collected data showed that the UX differs depending on the size of the FOLED-CAS. Additionally, most subjects experienced difficulty in recognizing the sign on foggy nights and argued that the sign needs to be more vivid. These results can be applied to redesign FOLED-CASs and contribute to improving driving experience and safety.
A multi-stage manufacturing process (MMP) consists of a series of process stages that are designed to conduct specific tasks in order to produce products. As production proceeds, the history of machines that operated on a product at each process stage is referred to as its processing record. To increase the efficiency and capability of production, a modern MMP operates multiple machines in a process stage. Since the machines at each process stage can differ in performance in practice, these differences result in variations from the desired value of product quality. To ensure product quality reliably, it is important to capture where the variations occur in the middle of the MMP. However, in the MMP, the result of a former process stage affects the results of later process stages. Consequently, a compound effect of machines at various process stages results in variation of product quality. This study proposes an approach to discover suspicious combinations of machines at different process stages, so-called machine sequence patterns (MSPs), causing significant variations in product quality. The proposed approach extracts the suspicious MSPs by evaluating two aspects of an MSP: frequency of appearance and influence on product quality. Frequency of appearance filters out MSPs that appear by chance in the processing records. Influence on product quality determines whether an MSP has a significant effect on the variation of product quality. A hypothetical dataset of processing records of an MMP is generated to evaluate the performance of the proposed approach. The simulated experiment shows the viability and effectiveness of the proposed approach in discovering the MSPs causing the variations.
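On a toy processing-record table, the two screens might be computed as below; the frequency threshold and the influence measure (drop in mean quality relative to the overall mean) are illustrative simplifications of the proposed evaluation.

```python
# Frequency filters out rare machine combinations; influence measures how
# far a combination's mean quality falls below the overall mean.
import pandas as pd

records = pd.DataFrame({
    "stage1": ["M1", "M1", "M2", "M1", "M2", "M1"],
    "stage3": ["M7", "M7", "M8", "M7", "M8", "M7"],
    "quality": [0.52, 0.55, 0.91, 0.50, 0.88, 0.54],
})
overall = records["quality"].mean()
msp = (records.groupby(["stage1", "stage3"])["quality"]
       .agg(frequency="size", mean_quality="mean"))
msp["influence"] = overall - msp["mean_quality"]
suspicious = msp[(msp["frequency"] >= 3) & (msp["influence"] > 0.1)]
print(suspicious)   # (M1, M7) appears often and degrades quality
```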
Wafer bin maps (WBMs) in probe tests represent spatial information of defective dies on a wafer, and the spatial patterns of defective dies provide critical clues for extracting defect causes in the complex semiconductor manufacturing process. Classifying the spatial pattern of defective dies into pre-defined or new patterns suggests which defect causes occur in the wafer fabrication process. With the aid of rapid advances in classification methods, various approaches exist to detect and classify spatial patterns of defective dies. However, existing studies have rarely provided a comprehensive review of WBM spatial pattern detection and classification methods. This paper reviews 43 recent studies on detecting and classifying WBM spatial patterns. There are two major review points: 1) classification methods (e.g., probability model-based or feature extraction-based), and 2) classification criteria (supervised/unsupervised). This paper also suggests future research issues in WBM spatial pattern detection and classification. This study will provide a recent research trend of WBM spatial pattern detection and classification.
A wearable smart safety airbag is a smart safety product with a sensor that automatically detects a worker's fall from a particular height and operates an inflator to protect the worker's upper body from the impact. The objective of this study is to evaluate the user experience (UX) of a wearable smart safety airbag. We conducted an experiment using 24 subjects who have experience in construction work to evaluate the airbag from a UX perspective. Data on subjective UX evaluation as well as body motion change were collected. The analysis result shows that the subjects rated highly the dimensions relevant to safety, such as reliability and confidence, but rated cost affordability low. Additionally, the result shows that the wearable safety airbag provides a better experience across overall dimensions for users than a normal safety vest does. This study contributes to developing a user-friendly airbag and increasing its competitiveness in the market.
Many manufacturing processes consist of numerous consecutive process stages, and each process stage has several pieces of equipment to perform the assigned work. However, even if those pieces of equipment have the same nominal performance, product quality varies according to the manufacturing process path in real manufacturing environments. Existing studies on manufacturing process paths mainly focus on enhancing process productivity and thus lack a process quality perspective. Therefore, this study investigates manufacturing process paths in terms of enhancing process quality in a multi-stage manufacturing process. The main objective of this study is to find 'golden' process paths, which are expected to show higher product quality than a predefined quality criterion. The proposed method uses process path information (i.e., process logs) and facility performance information to find 'golden' process paths. The data used in this study are generated under some hypotheses in a simulated environment and are used for verifying the performance of the proposed method. The derived 'golden' process paths are expected to contribute to process quality enhancement studies, such as finding efficient equipment combinations or making a dispatching plan.
Managing the yield of wafers is one of the most important tasks for semiconductor manufacturers. Many efforts to enhance wafer yield have been made in both industry and academia. Thanks to the advance of IoT and data analytics techniques, huge amounts of process operational data, such as indices of process parameters, equipment condition data, or historical data of the manufacturing process, are collected and analyzed in real time. Though the amount and availability of process operational data have increased, existing yield management approaches for the semiconductor manufacturing process have only considered a single process or a few processes among the overall processes. This study proposes a way to find process routes that maximize the yield of wafers (i.e., golden process routes) in view of multiple process steps. This work is expected to complement existing efforts for managing wafer yield by adding the results of process-oriented analysis.
Thanks to the advancement of IT and sensing technologies, the collection of real-time energy usage data has become possible. Huge amounts of energy usage data are being collected in various fields. This study reviews existing studies on the application of energy usage data. The existing studies are classified based on three attributes, namely, data (e.g., energy consumption, occupant behavior, and environmental data), information (e.g., energy reduction rate, energy usage pattern, and predicted energy consumption), and objective (e.g., energy conservation, energy monitoring, energy operation, and energy prediction). This study also examines the frequency of application cases by objective since 2000. The result of this study will help researchers understand the current status of energy usage data applications and plan future research.
Profitability through maintenance is becoming important because business in the maritime industry is deteriorating. In this context, real-time monitoring of the condition of a vessel's main engine is an important issue in the maritime industry, as any trouble with the main engine would cause serious safety problems and failure costs. Understanding the condition of a vessel properly and proactively is necessary because the main engine plays the most important role in a vessel. Existing studies have focused on univariate analysis or multivariate analysis with small amounts of data. However, the advancement of data collection technologies has allowed the collection of various types and massive amounts of data in the maritime industry. This study presents a case study that proposes a framework for data-driven condition monitoring of a vessel's main engine. This study supports the implementation of condition-based maintenance of the main engine and provides the basis for proactive management of the vessel's main engine in the future.
Mobile health (mHealth) services support continuous health-related monitoring, feedback, and behavior modification of individuals and populations through personal wireless communication devices. However, a high number of users have ceased using existing mHealth services. Poor service quality is a major reason for the high rate of withdrawal from such services. Therefore, the quality of mHealth services must be improved to enhance users' intention to continue the use of such services. Effective quality improvement for continuance intention can be achieved by enhancing the quality components that significantly influence that intention. However, few studies identify the quality components that are critical for continuance intention. The present case study aims to identify the quality dimensions (i.e., components) that are crucial for users' continuance intention of a certain mHealth service. Onecare, the mHealth service in this research, provides various forms of support for daily health behavior monitoring and improvement of college students by utilizing daily behavior data (such as daily sleep time, daily diet records, and walking steps collected through smartphones and activity trackers). In this research, five major quality dimensions of mHealth services were derived from existing studies: content quality, engagement, reliability, usability, and security. The total effect of each quality dimension on users' continuance intention was estimated by applying partial least squares structural equation modeling with the survey responses of 191 Korean college students who used Onecare for over three weeks. Estimation results suggest that engagement has the most significant total effect on continuance intention, followed by content quality and reliability. Conversely, the total effects of usability and security on continuance intention were found to be statistically insignificant. This research would serve as a basis for mHealth service managers in planning quality improvement to maximize the corresponding impact on continuance intention.
Various types and massive amounts of data are being collected in various industries with the rapid advancement of data collection technologies. Such a big data proliferation has provided new service opportunities. For example, heavy equipment manufacturers monitor, diagnose, and predict product health through prognostics and health management services using the data collected from heavy equipment. Consequently, equipment managers can cope with potential product breakdowns and maximize product availability for clients. System informatics-based services (SISs) refer to a new class of services, where the main contents and values are created based on the analysis of the data collected from the system in question. The emergence of SIS cases can be observed in diverse industries. In this talk, we will first review a few recent research projects for developing new SISs in automobile, marine transportation, and healthcare industries. A typical SIS process undergoes three phases: data acquisition, data analytics, and service provision. We will discuss several major research issues associated with the main phases of the SIS process, including which data to collect, how to collect and manage them, how to analyze them, which information to extract, and how to utilize the information in designing and developing new services. This study is expected to contribute to understanding and realizing new service opportunities in this data-rich information economy.
Recent innovations in IT, such as IoT and big data analytics, have enabled the collection and analysis of huge amounts of process operational data on manufacturing processes. In line with technological advances, the current research trend in the semiconductor manufacturing process is also changing rapidly. Research areas on process optimization and on process monitoring and diagnosis were regarded as independent in the past. However, as operational data from ongoing processes become available, the boundary between the two research areas is getting vague. In this paper, we investigate the recent trends of the two research areas, with a focus on their integration, in the semiconductor manufacturing industry. This paper reviews recent improvements and activities for process optimization, monitoring, and diagnosis in the semiconductor manufacturing process. More than 60 papers were reviewed and classified based on three dimensions: 1) type of research (e.g., process description, process prediction, and process prescription), 2) objective of research (e.g., improving quality or productivity), and 3) methods used to achieve the objective (e.g., design of experiments, mathematical modeling, data mining, and simulation modeling). The result of this study will help in understanding the current trend of process improvement in semiconductor manufacturing. This work will lay a foundation for finding research topics associated with both process optimization and process monitoring and diagnosis simultaneously.
Main engine failures in ship operations can lead to major damage in terms of the vessel itself and the failure cost. In this respect, condition monitoring of a vessel's main engine is crucial in ensuring the vessel's performance and reducing the maintenance cost. The collection of a huge amount of vessel operational data in the maritime industry has never been easier with the advent of advanced data collection technologies. Real-time monitoring of the condition of a vessel's main engine has the potential to create significant value in the maritime industry. This study presents a case study that proposes a framework for condition monitoring of a vessel's main engine. The case study uses sample data of an ocean-going vessel operated by a major marine services company in Korea, collected in the period of 2015-2016. This study first identifies various main engine-related variables that are considered to affect the condition of the main engine, and then detects abnormalities and their patterns via multivariate control charts. This study is expected to help enhance the vessel's availability and provide a basis for condition-based maintenance that can support proactive management of the vessel's main engine in the future.
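The monitoring step could be sketched with a Hotelling T^2 statistic, a standard multivariate control chart; the variables, control limit, and injected shift below are synthetic and do not reproduce the study's actual chart design.

```python
# Hotelling T^2 over several engine variables: points above the control
# limit are flagged as abnormal operating conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(size=(500, 4))        # in-control engine readings
mu = baseline.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

new = rng.normal(size=(50, 4))
new[-3:] += 3.0                             # inject an abnormal shift
d = new - mu
t2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)

ucl = stats.chi2.ppf(0.99, df=new.shape[1]) # large-sample control limit
print(np.where(t2 > ucl)[0])                # indices of flagged samples
```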
Mobile health (mHealth) services supporting continuous health monitoring, feedback, and behavior modification of individuals and populations by using personal wireless communication devices have emerged. A promising type of mHealth service supports health behavior monitoring for college students. However, such services appear to have quality problems and need quality improvement. This research answers two research questions that should be considered for quality improvement in the context of mHealth behavior monitoring services for college students. The first research question examines the effects of gender and age on customer evaluation of each quality attribute. The second looks at how each quality attribute influences customer satisfaction. To answer the two questions, this research conducted a survey of 191 Korean college students. When ANOVA was applied to the survey responses, gender and age were found to be significant factors affecting customer evaluation of quality attributes. A Kano analysis of the survey responses suggested the effect each quality attribute has on customer satisfaction. The results of the analyses are expected to enhance the understanding of quality attributes in mHealth behavior monitoring services for college students and hence contribute to the quality improvement of such services.
Various types and massive amounts of data are being collected these days in multiple industries with the rapid advancement of data collection technologies. Such a big data proliferation has provided new service opportunities. For example, heavy equipment manufacturers monitor, diagnose, and predict product health through prognostics and health management services using the data collected from heavy equipment. Consequently, equipment managers can cope with potential product breakdowns and maximize product availability for clients.
The goal of this research is to identify service opportunities for enhancing driving safety for commercial vehicles (including intra-city buses, express buses, and trucks). Based on an analysis of vehicle operational data in conjunction with accident data, new service opportunities for enhancing driving safety are identified. The service opportunities would contribute to developing new services for commercial vehicle companies and related authorities in Korea.
The Online-to-Offline (O2O) service finds and attracts users online and directs them to offline stores. Examples include Uber, Zipcar, and Groupon. Although the term O2O is frequently used in academia and industry, research on methods for systematically developing O2O services is not yet mature. The objective of this research is to develop an O2O service blueprint that shows the overall flow and components of an O2O service. Using this blueprint, O2O service providers can obtain a systematic understanding by visualizing their services from the customer perspective.
Various types and massive amounts of data are being collected through physical and social sensing. In many cases of data use, the results and value of data analytics are conveyed to specific beneficiaries (e.g., individuals and organizations) within a service system, such as a transportation, energy supply, or healthcare service system. Thus, data use can be directed and improved based on considerations of the relevant service system. In this paper, we suggest that effective use of data analytics can be guided by the question, “How does data analytics contribute to the creation of a smarter service system?” To facilitate answers to this question, we define a smart service system from a data application perspective, and propose a specific approach, service-oriented data analytics, based on eight case studies related to smart service systems. We introduce an ongoing case study to demonstrate the applicability and utility of our proposals.
This paper proposes a data-driven methodology to design new service concepts for vehicle operations management (VOM). VOM service refers to a group of services that help drivers drive safely, conveniently, and pleasurably. The proposed methodology aids service designers with the design of VOM service concepts starting from VOM-related data. Case studies on buses are presented to demonstrate the feasibility and effectiveness of the methodology. The proposed methodology is expected to facilitate VOM service design process and serve as a basis for data-driven service innovation.
The proliferation of customer data provides numerous service opportunities to create customer value with data. A data-driven approach for customer value creation in services is necessary to facilitate identification and realization of such opportunities. This paper proposes such an approach, data-driven customer process management (CPM), that supports customer value creation with data from customer processes. CPM is an approach for monitoring, measuring, and improving a specific customer process (e.g., driving and exercising) based on data from the process (e.g., driving speed and exercise time). The measurement of certain aspects of the process (e.g., safety of driving and performance of exercise) helps customers improve and manage their processes. This paper defines CPM and proposes the framework to perform it, integrating insights within the literature related to big data, customer value creation, and process improvement, as well as our own empirical studies on designing services with large databases of customer data. Under the CPM framework, customer processes can be improved and managed with data, similar to manufacturing and business processes. We expect this paper to foster use of CPM in various areas, stimulating use of customer data in enhancing customer value creation.
Various types and massive amounts of data are collected in multiple industries. The proliferation of data provides numerous opportunities to improve existing services and develop new ones. Although data utilization contributes to advancing service, studies on the design of new service concepts using data are rare. The present study proposes a data-driven approach to designing new service concepts. The proposed approach is aimed at helping service designers to understand customer behaviors and contexts through data analysis and then generate new service concepts efficiently on the basis of such understanding. A case using bus driving data is introduced to illustrate the process of the proposed approach. The proposed approach provides a basis for the systematic design of new service concepts by enabling efficient data analysis. It also holds the potential to create a synergetic effect if incorporated into existing approaches to designing new service concepts.
Various health behavior data, such as walking steps and time asleep, can be collected from individuals daily through smart devices. The availability of such data promotes health behavior support services that provide users with information for health behavior management by utilizing the users' health behavior data. College students are a target population for health behavior support services due to their negative health behaviors. This research identified which health behaviors are critical and thus should be managed by health behavior support services for college students. This research collected a data set from 47 Korean college students during a four-week experiment. The data set included 14 variables across four dimensions. Three dimensions included 10 variables measuring the daily activity, sleep, and diet of the students through smart devices. The other dimension included activity, sleep, diet, and overall scores that the students evaluated daily based on how healthy they viewed their own daily activity, sleep, and diet. Analysis of the data set identified statistically significant variables on the activity, sleep, diet, and overall scores as critical health behaviors for health behavior support services for college students. This result provides clues for the improvement or development of health behavior support services for college students.
The study proposes a conceptual framework of new service development (NSD) for system informatics-based services (SIS), called SI-NSD. System informatics-based services (SIS) refer to a new class of services, where the main contents and values are created based on the analysis of the data collected from the system in question. SI-NSD aims to enhance the effectiveness as well as the efficiency of the new SIS development process. In this talk, we present a conceptual framework of SI-NSD, which includes the essential phases of SI-NSD and their linkage. Real cases in automobile and healthcare industries will also be discussed.
This talk proposes a data-driven approach to designing service concepts for vehicle operations management (VOM). The proposed approach first collects VOM-related data through various sensors installed on vehicles, analyzes the data to extract insights regarding vehicle operations, and then designs service concepts to support the operation of vehicles. This talk also presents case studies on passenger and commercial vehicles.
A key component of servitization in manufacturing industries is informatics, which transforms product and customer data into information for customers. Informatics-based service is defined as a type of service wherein informatics is crucial to customer value creation. In this talk, we introduce two case studies on the design of informatics-based services in manufacturing industries. Various aspects of informatics-based service design in manufacturing are also discussed.
Using big data effectively in service design requires having a model that describes the service in question along with the data in use. In this talk, we propose a generic structural service model to describe a service with a set of predefined variables, facilitating design of services that use big data. The variables include service objective, indicators, customer and context variables, and delivery contents. We discuss the model in the context of several case studies of service design.
Purpose: Mobile location-based services (m-LBS) can be defined as a type of mobile service (m-service) that provides customized information or content based on the location of customers and their surrounding environment. M-LBS has new characteristics compared with existing m-services, and thus existing studies on m-service quality (m-SQ) scales are limited in measuring m-LBS quality. Hence, this study aims to design a scale for measuring m-LBS quality.
Methodology/approach: This study utilizes a general procedure of quality scale design in the service quality literature. A comprehensive literature review was conducted to generate the preliminary dimensions and items, and qualitative research was also conducted to find the features of m-LBS quality.
Findings: A quality scale for m-LBS, focused on map services, was designed. The quality scale is composed of eight dimensions and 31 measurement items.
Research limitations/implications: The designed scale would help practitioners evaluate the quality of m-LBSs. However, the scale has not yet been statistically validated. Its reliability and validity will be tested in future studies.
Originality/value: The originality of this study lies in exploring m-LBS quality and proposing a new quality scale for m-LBS. This study is a new addition to the m-SQ literature for m-LBS, which is expected to grow very fast in the near future.
Purpose: This study presents a new class of services, called system informatics-based service (SIS), where the main contents and values are created based on the analysis of system data. Prognostics and health management services, asset analytics services, and building energy management services are representative SIS cases. SIS cases are prevalent in practice, but studies on SIS from an academic perspective are scarce. This study proposes a conceptual model of SIS to lay a basis for future studies on SIS.
Design/Methodology/Approach: A comprehensive review of related literature was conducted. In particular, the literature on system monitoring, informatics, and service science was carefully reviewed. A large number of SIS cases were also collected and analyzed.
Findings: We developed a conceptual model of SIS, which includes the essential elements of SIS and their linkage. The characteristics and properties of SIS were also identified.
Research limitations/implications: The proposed conceptual model of SIS lays a basis for future studies on SIS. A number of follow-up studies on SIS development and operation are expected.
Information-intensive service (IIS) is a type of service in which customer value is primarily created via information or data interactions between the customer and the provider. This paper presents a case study to systematically identify the process parameters related to a specific quality dimension in an IIS case, namely Internet Protocol Television (IPTV). This service transmits multimedia content, such as live TV, video, audio, and data, to televisions in packets through the Internet. In the case study, a process related to a specific quality dimension in the IPTV service is derived. The process is divided into several sub-processes. Diagrams are then drawn at the system and activity levels through process modeling. The system-level diagram presents the systems participating in the sub-processes and the data interactions between the systems. The activity-level diagram shows the activities and sub-systems in each sub-process. The measurable characteristics of the activities and sub-systems that affect the quality dimension are defined as process parameters. This research provides insights into the systematic identification of process parameters in IIS.
Numerous manufacturing companies have “servitized” their value propositions to address product commoditization and sustainability issues. Service—essentially different from a product—contributes to the fulfillment of customers’ unmet needs and increases the freedom of finding an environmentally more benign offering beyond simply offering the product. Informatics is a key to the design of services in manufacturing companies. Informatics facilitates the collection of various types of data from products and customers and enables the production and delivery of useful information for customers. This paper (1) proposes a conceptual framework for designing informatics-based services in manufacturing industries, (2) introduces a service design case study that the authors recently conducted with a major car manufacturer in Korea, and (3) suggests future research issues. This paper is expected to contribute to product–service integration in manufacturing companies in this information economy.
This study proposes an approach to identifying new service opportunities for enhancing driving safety. Service opportunity identification involves finding the target customers of the service (to whom), the motivations for the service (why), the service contents (what), and the service delivery process (when, where, and how). The proposed approach consists of two phases. The first phase involves the analysis of driving behavior using operational data in conjunction with traffic accident data and drivers' driving history data. The goal of this analysis is to extract insights for the identification of new service opportunities for driving safety enhancement. The second phase is the identification of service opportunities on the basis of the analysis from the first phase. Quality function deployment and the universal job map are employed to identify the service opportunities for driving safety enhancement. This study also presents the results of a case study in Korea. A sample of the operational data of intra-city buses collected in 2013 was analyzed, and four service opportunities for driving safety enhancement were identified. This research will provide a basis for the systematic development of new services using a data-driven approach and contribute to enhancing the driving safety of buses in Korea.
Hypertension and its complications are a major source of increased national medical expenditures in Korea. We aim to evaluate the risk of hypertension onset and the risk of its complications for hypertension patients, using national healthcare databases established by the Korean National Health Insurance Corporation. We apply classification techniques such as logistic regression, linear discriminant analysis, and classification and regression trees to score the risk of hypertension complication onset, and compare the performance of these methods. We also consider various under-sampling and over-sampling strategies for handling imbalanced data. The three classification methods perform similarly, although logistic regression performs marginally better than the others. We also present how to apply this result to reducing the risk proactively through a service model. This study is meaningful in that the database used is a representative sample of the whole nation.
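To make the modeling setup concrete, here is a minimal sketch of one variant (logistic regression with random over-sampling of the minority class); the column names are hypothetical, not the actual database fields.

```python
# Minimal sketch of risk scoring on an imbalanced data set: logistic
# regression with simple random over-sampling of the minority (onset)
# class. "complication_onset" and the feature columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def fit_risk_model(df: pd.DataFrame, target: str = "complication_onset"):
    X, y = df.drop(columns=[target]), df[target]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    # Over-sample the minority class in the training split only
    n_major = int((y_tr == 0).sum())
    upsampled = resample(X_tr[y_tr == 1], n_samples=n_major,
                         replace=True, random_state=0)
    X_bal = pd.concat([X_tr[y_tr == 0], upsampled])
    y_bal = pd.Series([0] * n_major + [1] * n_major)
    clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    return clf, auc
```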
The National Health Insurance Service (NHIS) of Korea has collected insurance and medical record data of nearly all citizens since 2001. Development of a big picture of healthcare service opportunities utilizing the NHIS databases is of national interest. Such a picture would serve as a map for healthcare policy and service development. In this talk, we discuss how we identified 138 data-driven healthcare service opportunities and present the big picture which encompasses the opportunities.
A product-service system (PSS) is an integrated bundle of products and services which aims at creating customer value. Recently, with the advancement of ICT and analytics technologies, informatics has been utilized in PSSs. This type of PSS is called an informatics-oriented PSS and has different properties compared with conventional PSSs. In this talk, we provide a review of informatics-oriented PSS. Based on the review, we clarify the concept of this PSS and discuss its characteristics and challenges.
The National Health Insurance Service of Korea has collected health service data of nearly all the citizens since 2001. This research aims to develop a new service model for hypertension patient management using a sample of the data set. This talk focuses on the process of designing and evaluating the service production and delivery processes reflecting the characteristics of target customers. The processes for a service supporting blood pressure self-control will be presented as an example.
The Korean government developed a system to collect the operational data of commercial vehicles. This talk presents an analysis of the operational data collected in conjunction with traffic accident data. The goal of the analysis is to gain insights for the development of new service concepts for driving safety enhancement. The relationship between the driving patterns and accident history of drivers is identified and utilized in developing service concepts supporting driving safety enhancement.
In Korea, the National Health Insurance Service (NHIS) has collected insurance and medical record data of nearly all the citizens since 2001. We developed eight new healthcare service concepts which utilize the NHIS databases. In this talk, we present how the new service concepts were developed. We also discuss some challenges of developing healthcare service concepts.
The Korean government seeks to develop and provide driving safety enhancement services to the drivers of commercial vehicles such as buses, taxis and trucks. This paper proposes a data-driven approach to developing driving safety enhancement services for commercial vehicle drivers in Korea. The approach consists of two phases. The first phase is to analyze commercial vehicles’ operational and traffic accident data archived in the Korean government. An analysis of the data may reveal insights as to how commercial vehicle drivers drove and how their accidents occurred. The second phase is to develop service concepts by integrating a few service ideas which are generated based on the results of data analysis. This research would enhance the driving safety of commercial vehicles in Korea, and provide a basis for developing a data-driven approach to developing services.
The National Health Insurance Service in Korea has collected and maintained the health service data of nearly all citizens in the national health service databases (DB) since 2001. A sample of the DB was released for research purposes in 2013. The sample DB contains the insurance data, diagnosis history data, treatment history data, and medical examination data of 1 million people over 9 years, 2002-2010. This research aims to design a service process for hypertension patient support utilizing the DB. The service process design follows a general new service development (NSD) process. The first phase is to establish the concepts for service process design. Opportunities for new services are identified from current hypertension patient activities, and service concepts are established from various service ideas. The second phase is to design the service process. Activities of the service provider and customer are generated from the universal job map. These activities and the relevant information flows are visualized in a modified service blueprint. Detailed designs of customer interactions, information exchanges, and physical evidence are conducted with public healthcare experts. The third phase is to test and refine the service process via a pilot service. This research serves as a reference case for designing healthcare service processes.
The product-service system (PSS) is a system in which its integrated products and services jointly fulfill customer needs. Recent reviews on the PSS research in the past 10 years identified that concretizing knowledge and experience for PSS development is a critical and timely research topic. This research aims to provide a review of existing methods and knowhow for PSS development, and to propose an integration of existing knowledge into a PSS development framework. The current conference paper introduces an interim outcome of this research.
A digital tachograph (DTG) is a device installed on a vehicle that records its operation data. The Korean government maintains a database (DB) containing the operation data of commercial vehicles. We are conducting a research project for driving safety enhancement using this DTG DB. In this presentation, we discuss the analysis of the DB to extract the relationship between driving patterns (such as speed, rapid accelerations and decelerations, and brakes on or off) and accident rates. The analysis results can be used in developing service models for driving safety enhancement.
We are conducting a research project for hypertension patient management using a sample of the national health service databases. We first develop models to predict the onset of hypertension (Part 1), and then develop a service model for proper intervention to prevent high-risk individuals from developing hypertension and related complications (Part 2). In this paper, we describe the new service model development process of Part 2. The process consists of five steps: review current related services, understand the requirements of stakeholders, develop a service concept, design the service delivery process, and refine the service model. The service model development is in progress, and we are now in the third step. Once the service concept development is completed, a detailed process for service delivery will be devised and the resulting service model will be tested in two branches of the National Health Insurance Service in Korea.
The Ministry of Health and Welfare in Korea maintains the national health service databases (DB). A sample of the DB has been released for research purposes. The sample DB contains the insurance data, diagnosis history data, treatment history data, and medical examination data of 1 million people over 9 years, 2002-2010.
An experience-centric service (EXS) is a type of service in which customers experience emotionally appealing events and activities resulting in their distinctive memory. Examples include amusement park, entertainment, counseling, party, and leisure services. This talk proposes a structured tool for visualizing customer experience process in EXSs, called Customer Experience Board. Using this tool, users can obtain a systematic understanding of the value-creation mechanism of an EXS.
Product-Service System (PSS) is a type of business offering for fulfilling customers' needs through the integration of products and services. In developing a PSS, it is important to evaluate PSS quality from the customer's perspective so that customer needs are reflected in the PSS. Although there are many studies on PSS evaluation, they are insufficient for evaluating PSS quality from the customer's perspective because they have focused on the provider's perspective. In this talk, we present a scheme for evaluating PSS quality from the customer's perspective.
Servicescape refers to the physical surroundings that affect the service activities of customers and employees. Recently, a virtual reality (VR)-based testing laboratory has been introduced. The laboratory is useful in servicescape evaluation because it alleviates the time and cost constraints of servicescape modeling. This talk presents a case study on the interior evaluation of a duty-free shop using VR models.
The digital tachograph (DTG) is a device to collect the operational data of an automobile. The driving patterns of a driver can be extracted by analyzing the DTG databases. This talk presents an analysis of the driving patterns extracted from the DTG databases and their impacts on the actual accident rates in Korea. This research would serve as a basis for developing the driving safety enhancement service concept.
The service testing laboratory is a VR-based facility that supports the testing of services using VR modeling techniques. Service testing in the laboratory unavoidably faces gaps resulting from the difference between the real service environment and the laboratory environment. As these gaps influence the quality of service testing, they should be identified and addressed properly. This talk presents a model to understand how the gaps are formed and how they affect service testing.
A mobile content service (MCS) is a service in which customers participate with their mobile devices to use, play, or read content provided by the service provider and delivered to the customer's mobile device. Examples of mobile content services include application markets, music/video streaming services, e-book services, and information-providing services. To improve the service quality of an MCS, service providers should understand what constitutes MCS quality and how to measure it accurately. However, work on MCS quality to date is not comprehensive enough to provide a guideline for understanding and evaluating the quality of MCSs. The objective of this research is to develop a quality scale applicable to evaluating two kinds of MCS: application market services and cultural content services. This research is expected to contribute a useful scale, including new quality dimensions that have not been considered before, for understanding and measuring the quality of MCSs.
Not Available
The product-service system (PSS) is a business system in which its integrated products and services jointly fulfill customer needs. This research proposes an evaluation scheme for PSS models. The PSS model evaluation scheme consists of evaluation criteria and methods. The current paper mainly focuses on the introduction of the evaluation criteria and their application. The set of evaluation criteria has a four-layered hierarchical structure comprising 2 perspectives, 5 dimensions, 21 categories, and 94 items in total. The criteria are designed to consider the provider and customer perspectives, and all 3P (profitability, planet, and people) dimensions. They cover various stages of a PSS lifecycle, namely, design, production, sales (or purchase), usage, and disposal. To illustrate the usefulness of the proposed evaluation scheme, a few PSS cases are first modeled using an existing PSS visualization tool and then evaluated using the scheme. Case studies show the proposed evaluation scheme is workable for assessing the potential value of the PSS models in question; it provides an extensive knowledge base for PSS evaluation and thereby serves as an efficient and effective aid to practitioners for successful PSS development.
As the notion of User eXperience (UX) plays an important role in enhancing the competitiveness of products, UX evaluation has become an important issue in new product development. This talk presents a repository of UX components, called a UX component model, which is used for evaluating the UX of products. An application of the model is also presented using a case study on smart phones.
A combination of smartphone hardware and its related services is a representative case of product service systems (PSS). In addition to the hardware, the services, which provide users with various contents by web or applications, are a major source of customer satisfaction. Hence, when evaluating quality of smartphone, the quality of its related services should be considered. In this talk, we present a scheme for evaluating the quality of smartphone PSS.
A contents-oriented service is a type of service in which service contents have major roles in service provision. Examples include car-infotainment services and internet-based education services. This talk presents issues regarding the visualization of contents-oriented services. The conventional service blueprint and other existing tools for service visualization are reviewed for the visualization of contents-oriented services; the limitations and alternatives are also discussed.
Recently, a virtual reality (VR)-based laboratory for service testing has been constructed in Korea. An important issue associated with its operation is how to evaluate the quality of a service tested in a laboratory environment. This talk presents a service quality evaluation scheme for a service laboratory environment. The evaluation scheme consists of two parts: a repository of service evaluation criteria and a guideline for selecting pertinent criteria for a particular case study. The results of a case study will also be presented.
Advanced IT technologies enable automobile companies to provide various services based on the trip data collected from their customers' vehicles. In this talk, we present a data-driven approach to developing vehicle-related service concepts. The approach first identifies service opportunities by understanding how customers drive and what customers need through data analysis. Some of the identified opportunities are then elaborated to form service concepts. The approach is demonstrated through a case study on the automobile telematics service.
The product-service system (PSS) is a system in which its integrated products and services jointly fulfill customer needs. Visualizing PSS, a complex yet invisible system, helps people understand and analyze it. This talk introduces an efficient yet simple tool, called the PSS Board, to visualize the PSS process and presents its utilization issues from a PSS operations management perspective. The PSS Board is a matrix board where the customer activities, state of the products, services, dedicated infrastructures, and partners are placed in rows, and the general PSS process steps are placed in columns. The visualized PSS on the board shows how the PSS provider and its partners aid customers’ job execution process. As the Service Blueprint, a tool for service process visualization, has been utilized for various types of service operations management research, the PSS Board could prove to be essential in PSS evaluation, improvement, design, and delivery.
An innovative industry is organized into an ecosystem in which various enterprises and their products and services participate based on a platform. Many of those enterprises attempt to lead their business ecosystem, and gain the advantages thereof, by providing the ecosystem platform. Prior research on platform leadership has used intuitive approaches to determine the platform conditions under which the platform leader can succeed in the market. In the present study, by contrast, we established strategies for attaining platform leadership that consider not only the platform but also its structural influence on the whole ecosystem. Since the relation-based model represents a platform and its relations with other products, services, and participants more suitably than other representation models, we applied it to the analysis of an ecosystem and used it to establish platform strategies according to five key aspects. Here, we demonstrate our approach by means of a case study of a mobile ERP solution industry in Korea. We also present, as derived from the analysis results, three platform strategy alternatives that enable an ecosystem to grow sustainably while providing sufficient profits to all participants and strengthening the platform leadership of the focal company.
Product-Service System (PSS) is a novel type of business model integrating products and services so that they jointly satisfy customer needs. PSSs can add economic, environmental, and social value for diverse stakeholders. This research proposes a PSS blueprint, a representation scheme to visualize a PSS model with a focus on its value creation mechanism. Existing PSS cases are represented and analyzed using the proposed PSS blueprint.
‘s-Scape’ (service-Scape) is a virtual reality (VR)-based system for service testing, which is now under development. One important issue associated with its operation is how to evaluate the quality of a service tested in s-Scape. Measuring service quality in s-Scape is particularly challenging in the sense that the measurement is to be done in a VR-based laboratory environment. This talk presents the current status and future research issues for a service quality evaluation scheme for s-Scape.
Product-Service System (PSS) is a business model for fulfilling customers’ needs through the integration of products and services. The objective of this paper is to review the existing literature on PSS evaluation and then to identify main research issues in this field. In this research, 39 existing papers are collected (11 papers for PSS evaluation and 27 papers for product/service evaluation), and they are classified based on two criteria, namely, evaluation perspective and research type. The evaluation perspective consists of provider perspective and customer perspective. The provider perspective is classified into three sub-categories (i.e. economic, environmental, and social value). The research type is also classified into two sub-categories (i.e. methodology development and case study). Each collected paper is mapped in the appropriate sub-category. Based on the literature review, findings on current status of PSS evaluation and some promising future research directions are discussed.
In recent years, many companies, including airlines and retailers, have adopted customer loyalty programs to enhance customer satisfaction and loyalty, which can drive repurchase. However, they are not convinced that such loyalty programs are really effective in enhancing customer loyalty. In this talk, we present a case study which investigates whether the loyalty program of an automobile manufacturer has an impact on enhancing its customer loyalty. The company's customer loyalty program (CLP), launched about four years ago, provides various leisure-related services as well as automobile maintenance services. For this research, an empirical study was conducted involving an on-line survey of its loyalty program members. Structural equation modeling (SEM) is used in building and analyzing quantitative models among CLP service indices and CLP performance indicators. As a result, CLP utilization intention, switching cost, customer satisfaction, brand preference, and brand trust have significant direct effects on customer loyalty. Some service indices also have significant indirect effects on customer loyalty through CLP performance indicators. The outcome of this study can be used in devising a strategy for improving the company's loyalty program.
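As an illustration of the SEM step, here is a minimal sketch using the semopy package with a hypothetical, simplified path model and file name; the study's actual constructs, items, and model structure are more elaborate.

```python
# Minimal SEM sketch with semopy; the model specification, column names,
# and CSV file are hypothetical simplifications, not the study's model.
import pandas as pd
from semopy import Model

survey_df = pd.read_csv("clp_survey.csv")  # hypothetical survey data

desc = """
satisfaction ~ maintenance_service + leisure_service
brand_trust  ~ satisfaction
loyalty      ~ satisfaction + brand_trust + switching_cost
"""

model = Model(desc)
model.fit(survey_df)    # estimate path coefficients
print(model.inspect())  # estimates, standard errors, p-values
```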
A prototype is often used as part of a new product development (NPD) process to allow product designers to explore design alternatives, test theories, and verify performance prior to starting production. Unlike in NPD, the concept of a prototype in new service development has been considered vague or atypical. s-Scape (service-Scape) is a virtual reality-based system for service prototyping and testing now under development in Korea. This paper introduces s-Scape development projects and includes a case study on the service quality evaluation of a used-car dealer using s-Scape. This paper also presents discussion issues and future work in evaluating the service quality in s-Scape.
This paper proposes an evaluation scheme for product-service system (PSS) models with a focus on constructing evaluation criteria. The proposed scheme has a four-layered hierarchical structure consisting of perspectives, dimensions, categories, and aspects. The sustainability and customer value perspectives are also considered. The sustainability perspective consists of the 3P (i.e., profit, planet, and people) dimensions, while the customer-value perspective consists of quality and cost dimensions. Each dimension is further classified into categories, and more detailed aspects about these are explained. The set of criteria consists of 5 dimensions, 22 categories, and 96 aspects in total. The proposed PSS evaluation scheme is demonstrated and validated through a hypothetical case study in the automobile industry. The scheme can serve as an effective aid in evaluating and eventually improving future PSS models.
This paper proposes an evaluation scheme for PSS with a focus on constructing evaluation criteria. The proposed scheme has a four-layered hierarchical structure. The four layers refer to perspectives, dimensions, categories, and aspects. The sustainability and customer value perspectives are considered. The sustainability perspective consists of 3P dimensions, while the customer-value perspective consists of quality and cost dimensions. Each dimension is further classified into categories, and finally into more detailed aspects. The scheme has 5 dimensions, 24 categories, and 87 aspects in total. The PSS evaluation scheme can serve as an effective aid in designing as well as evaluating a PSS.
In dual response surface optimization (DRSO), the mean and standard deviation responses are often in conflict. To obtain a satisfactory compromise, the preference information of a decision maker (DM) on the tradeoffs among the responses should be incorporated into the problem. Some existing works suggested an approach of minimizing the weighted mean square error (WMSE) to incorporate the DM's preference information. In the WMSE approach, the DM provides his/her preference information by specifying weights for the mean and standard deviation responses. The weights should be determined in accordance with the DM's preference structure regarding the tradeoffs. However, it is often difficult to specify weights congruent with the DM's preference structure without a systematic method. In this study, we develop an interactive weighting method for DRSO in which the DM provides preference information in the form of pairwise comparisons. Our method does not require weights to be specified in advance. Instead, it uses the results of the DM's pairwise comparisons to estimate the weights interactively. The required preference information is relevant and therefore easy for the DM to provide. The method is effective in that a highly satisfactory solution for the DM can be obtained through a few pairwise comparisons.
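For reference, the WMSE objective mentioned above is commonly written as follows; this is a standard textbook formulation in our own notation, not necessarily the authors' exact symbols:

```latex
\min_{\mathbf{x} \in \Omega} \; \mathrm{WMSE}(\mathbf{x})
  = \lambda \bigl(\hat{\mu}(\mathbf{x}) - T\bigr)^{2}
  + (1-\lambda)\,\hat{\sigma}^{2}(\mathbf{x}),
  \qquad 0 \le \lambda \le 1,
```

where \hat{\mu} and \hat{\sigma} are the fitted mean and standard deviation response surfaces, T is the target for the mean, and the weight \lambda encodes the DM's tradeoff preference; the interactive method above estimates \lambda from pairwise comparisons instead of asking for it directly.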
In multiresponse surface optimization, responses are often in conflict. To obtain a satisfactory compromise, the preference information of a decision maker (DM) on the tradeoffs among the responses should be incorporated into the problem. We propose an interactive method where the DM provides preference information in the form of pairwise comparisons. The results of pairwise comparisons are used to estimate the preference parameter values in an interactive manner. The method is effective in that a highly satisfactory solution can be obtained.
A loyalty program is now a popular marketing tool in various industries. It typically provides customers with loyalty incentives such as membership points redeemable for discounts or prizes and free additional services to induce customers’ repurchase. However, its effectiveness is not always guaranteed. In this talk, we present how the structural equation modeling (SEM) can be used to measure and analyze the effectiveness of a loyalty program. A case study on the loyalty program of an automobile manufacturing company is also presented.
Product-service system (PSS) is a novel type of business model integrating products and services in a single system. It has the potential to create economic, environmental, and social values to various stakeholders. This research proposes a PSS classification scheme with two dimensions, namely, the nature of stakeholder network and the change in value creation activities. More than one hundred PSS cases are classified across the two dimensions. The proposed scheme can serve as an effective basis for PSS evaluation and new PSS development.
Not available
In recent years, there has been a rise in the popularity of loyalty programs in various industries, including airlines, retailers, and financial institutions. Loyalty programs typically provide customers with loyalty incentives such as points redeemable for discounts or prizes. However, there has been some debate on whether loyalty programs have significant impacts on increasing customer loyalty. In this talk, we present a case study which evaluates the impact of a loyalty program of an automobile manufacturer. The company's loyalty program, launched about three years ago, provides various leisure-related services as well as automobile maintenance services. The purpose of the case study is to investigate whether the loyalty program has an impact on enhancing customer loyalty and, if so, to identify the service items having important effects. An empirical study was conducted involving an on-line survey of the loyalty program members. Structural equation modeling (SEM) is used in building and analyzing quantitative models among service items, customer perception and satisfaction dimensions, and customer loyalty measures. The outcome of this study can be used in devising a strategy for improving the company's loyalty program.
Product-service system (PSS) provides a strategic alternative to product-oriented economic growth and severe price competition in the global market. The objective of this research is to develop a systematic methodology to generate concepts for new PSSs, called a PSS concept generation support system. The models and strategies of more than ninety existing PSS cases are analyzed, and the insights extracted from the analysis are used to facilitate the concept generation process. The generated PSS concepts, after some screening and elaboration, can evolve to new business models for PSS.
Product-service system (PSS) is a novel idea of integrating products and services in a single context. It provides a strategic alternative to product-oriented economic growth and severe price competition in the global market. The possible benefits of PSS include value co-creation throughout the product life cycle and improved socio-economic sustainability. Such advantages would lead to innovation of products and services, thereby enhancing the competitiveness of organizations. The purpose of our research is to build a methodology to support PSS idea development, with an emphasis on the generation of innovative ideas. The methodology is generic enough to be applied to a variety of product-service system contexts, and it is intended to meet the latent and possibly conflicting needs of stakeholders. In this talk, we present a methodology, called the ideation support system, which consists of various tools and a systematic procedure to support new PSS idea generation. The models and strategies of existing PSS cases are analyzed, and the insights extracted from the analysis are used to facilitate the idea generation process. A case study is conducted in the laundry industry to validate the methodology. The PSS ideas generated through this research can serve as a basis for developing new PSSs.
Not Available
In multiresponse surface optimization (MRSO), responses are often in conflict. To obtain a satisfactory compromise, a decision maker (DM)'s preference information on the tradeoffs among the responses should be incorporated into the problem. In most existing works, the DM expresses a subjective judgment on the responses through a preference parameter before the problem-solving process, after which a single solution is obtained. In this study, we propose a posterior preference articulation approach to MRSO. The posterior preference articulation approach initially finds a set of nondominated solutions without the DM's preference information, and then allows the DM to select the best solution among the nondominated solutions. The posterior preference articulation approach has the advantage of not requiring any information on the DM's preference in advance. An interactive selection method based on pairwise comparisons by the DM is adopted in our method to facilitate the DM's selection process. The proposed method enables the DM to obtain a satisfactory compromise solution and gives him/her the opportunity to explore and better understand the tradeoffs among the responses. Examples illustrate the proposed method and several features of our approach. It is shown that the proposed method obtains the most preferred solution while minimizing the DM's cognitive effort.
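As a sketch of the first step (finding nondominated solutions), the following Python snippet samples a design space and filters the Pareto-nondominated points; the two response functions are hypothetical stand-ins, not the paper's.

```python
# Minimal sketch of generating nondominated solutions for two conflicting
# responses, both to be minimized. Response functions are hypothetical.
import numpy as np

def nondominated(points: np.ndarray) -> np.ndarray:
    """Return the Pareto-nondominated rows (all objectives minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))                 # sampled design space
f1 = (X[:, 0] - 0.5) ** 2 + X[:, 1] ** 2              # e.g., bias from target
f2 = (X[:, 0] + 0.5) ** 2 + (X[:, 1] - 0.3) ** 2      # e.g., variability
pareto = nondominated(np.column_stack([f1, f2]))
# The DM then compares pairs drawn from `pareto` to pick the compromise.
```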
This paper presents a systematic guideline to support the idea generation for new product-service systems (PSS). The models and strategies of existing PSS cases are analyzed, and the insights extracted from the analysis are used to facilitate the idea generation process. The generated PSS ideas can serve as a basis for developing new PSS concepts.
A semiconductor manufacturing process produces approximately 400 chips on a single wafer. The chips at different locations on a wafer are not of the same quality, due to a fixed effect of the location. This research aims to optimize a semiconductor manufacturing process by finding the optimal setting of the input variables that minimizes the fixed effect while keeping the overall quality at a desirable level.
In multiresponse surface optimization (MRSO), responses are often in conflict. To obtain a satisfactory compromise, we propose a posterior preference articulation approach to MRSO. The proposed method initially finds a set of nondominated solutions, and then allows the DM to select the best solution among the nondominated solutions. The proposed method enables the DM to obtain a satisfactory compromise and gives him/her the opportunity to better understand the tradeoffs among the responses.
We propose a method for the optimization of a multistage manufacturing process that considers the occurrence of missing values in observational process data. The proposed method is based on a data mining technique called the Patient Rule Induction Method (PRIM). The performance of the proposed method is demonstrated using a case from a semiconductor manufacturing process.
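For orientation, the following is a minimal sketch of PRIM's core top-down peeling step, which the proposed method builds on; it assumes complete numeric data (the paper's missing-value handling is not reproduced) and hypothetical inputs.

```python
# Minimal sketch of PRIM's top-down peeling: repeatedly trim a small
# fraction of the data along the single dimension whose trim most
# increases the mean response inside the box.
import numpy as np

def prim_peel(X: np.ndarray, y: np.ndarray, alpha=0.05, min_support=0.1):
    """Return box bounds (lo, hi) per column after greedy peeling."""
    lo, hi = X.min(axis=0).astype(float), X.max(axis=0).astype(float)
    mask = np.ones(len(y), dtype=bool)
    while mask.mean() > min_support:
        best, best_mean = None, y[mask].mean()
        for j in range(X.shape[1]):
            for side, q in (("lo", alpha), ("hi", 1 - alpha)):
                cut = np.quantile(X[mask, j], q)
                trial = mask & (X[:, j] >= cut if side == "lo" else X[:, j] <= cut)
                if trial.sum() and y[trial].mean() > best_mean:
                    best, best_mean = (j, side, cut), y[trial].mean()
        if best is None:          # no peel improves the in-box mean
            break
        j, side, cut = best
        if side == "lo":
            lo[j] = cut; mask &= X[:, j] >= cut
        else:
            hi[j] = cut; mask &= X[:, j] <= cut
    return lo, hi
```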
In dual response surface optimization, the mean and standard deviation responses are often in conflict. To obtain a satisfactory compromise, a decision maker (DM)'s preference information on the tradeoffs between the responses should be incorporated into the problem. In most existing works, the DM expresses a subjective judgment on the responses through a preference parameter before the problem-solving process, after which a single solution is obtained. In this study, we propose a posterior preference articulation approach to dual response surface optimization. The posterior preference articulation approach initially finds a set of nondominated solutions without the DM's preference information, and then allows the DM to select the best solution among the nondominated solutions. The proposed method enables the DM to obtain a satisfactory compromise solution with minimum cognitive effort and gives him/her the opportunity to explore and better understand the tradeoffs between the two responses.
Not Available
In a semiconductor manufacturing process, hundreds of chips are manufactured on a single wafer, at different locations within the wafer. Chips at different locations differ in quality because of fixed effects depending on the location. In this research, the semiconductor manufacturing process is optimized by finding the optimal settings of the input variables that minimize the variation of the fixed effects as well as the bias of the quality from its target value.
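One way to formalize the stated objective is the following; the notation and the additive structure are our own assumptions, offered only as an illustration:

```latex
\min_{\mathbf{x} \in \Omega} \;
  \bigl(\hat{\mu}(\mathbf{x}) - T\bigr)^{2}
  + \sum_{l=1}^{L} \hat{\tau}_{l}(\mathbf{x})^{2}
```

where \hat{\mu}(\mathbf{x}) is the fitted overall mean quality at input setting \mathbf{x}, T is the quality target, and \hat{\tau}_{l}(\mathbf{x}) is the estimated fixed effect of wafer location l; the first term penalizes bias from the target and the second penalizes location-to-location variation.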
New service development (NSD) is a very important activity in creating and enhancing value for the existing customers as well as attracting new customers. NSD is a crucial phase to the success of the service, and at the same time, must deal with highly challenging issues. Nonetheless, NSD has not received due attention in the literature. This paper first reviews the existing efforts toward the systematic support of NSD and then discusses three promising research issues in this regard, namely, acceleration of NSD processes, IT support for NSD, and systematic development of product-service systems.
Faced with mounting competitive pressures, many companies are attempting to raise their profile in the market by offering new services. As a result, far greater attention has been accorded in the last few years to new service development and service engineering by researchers and businesses alike. However, if we consider the standard suggestions and approaches for developing new services, from the original idea up to the market launch, it is conspicuous that testing of development results has been largely neglected in the past. This paper describes how service testing can be realized in practice and presents a possible technique, ServLab, for simulating services with the help of virtual reality and service theatre.
This paper develops an IPTV service quality model which consists of three layers of features, namely, QoE, QoS, and NP. The key features and their relationships are identified via two-phase quality function deployment. Issues on the improvement of IPTV service quality are also presented.
Quality function deployment (QFD) provides a specific approach for ensuring quality throughout each stage of the product development and production process. It has been proven useful in reducing the product development cycle time, while simultaneously improving product quality and delivering the product at a lower cost. Since the focus of QFD is placed on the early stage of product development, uncertainty in the input information of QFD is inevitable. If this uncertainty is neglected, the QFD analysis results are likely to be misleading. It is necessary to equip practitioners with a new QFD methodology that can model, analyze, and dampen the effects of the uncertainty and variability in a systematic manner. This paper proposes an extended version of the QFD methodology, called Robust QFD, which is robust to the uncertainty of the input information and the resulting variability of the QFD output. In Robust QFD, the uncertainty of the input information is first modeled quantitatively. Utilizing the modeled uncertainty, the variability of the QFD output is formally analyzed. Given the variability of the QFD output, its robustness is evaluated and improved. To make the methodology concrete, the corresponding research issues, namely development of a robust prioritization method, development of robustness indices, and development of strategies to improve the robustness of the decisions, are also presented.
This research develops a service quality model for IPTV (Internet Protocol Television) services. The model consists of three layers of features, namely, QoE, QoS, and NP. The key features and their relationships are identified via two-phase quality function deployment. Issues on the improvement of IPTV service quality are also presented.
In practice, the uncertainty in the input information of QFD is inevitable. This paper proposes an extended QFD methodology, called Robust QFD, which considers the uncertainty of the input information and the resulting variability of the QFD output. The methodology consists of four phases: uncertainty modeling, variability derivation, EC prioritization, and robustness evaluation and improvement. The methodology is demonstrated via a case study.
Not Available
In dual response surface optimization, the mean and the standard deviation responses are often in conflict. To obtain a satisfactory compromise, a decision maker (DM)'s preference information on the tradeoffs between the responses should be incorporated into the problem. In most of the existing works, the DM expresses a subjective judgment on the responses through a preference parameter before the problem-solving process, and then a single solution is obtained. In this work, we propose a posterior preference articulation approach to dual response surface optimization. The posterior preference articulation approach first finds a set of nondominated solutions without the DM's preference information, and then allows the DM to select the best one among the nondominated solutions.
In dual response surface optimization, the mean and the standard deviation responses are often in conflict. To obtain a satisfactory compromise in such a case, a Decision Maker (DM)’s preference information on the tradeoffs between the responses should be incorporated into the problem. In this work, we propose a posterior preference articulation approach to obtain the best compromise in dual response surface optimization. The proposed method first generates a comprehensive set of nondominated points. Then, it prioritizes the generated nondominated points according to the DM’s preference structure. The proposed method allows the DM to have an opportunity for understanding the tradeoffs between the responses. Therefore, the DM may choose the best compromise that is faithful to his or her preference.
The new service development (NSD) process defines the what and the how of a new service. This paper proposes a systematic model for NSD process. The proposed model consists of three phases, namely, service concept development, service process design, and service performance verification. A special focus is placed on improving the rapidity of the NSD process.
This paper proposes an extended QFD planning model for goal attainment that considers longitudinal effect. In the longitudinal effect case, the level of goal is determined by a series of effects over a certain period of time rather than by the effect at a specific point of time. In the proposed model, the longitudinal effect is incorporated by introducing a time dimension into the existing house of quality structure. The proposed model is illustrated using a case in high-speed internet service.
Most work on the optimization of multistage manufacturing processes assumes that data have been collected and suitable models have been built. However, a good model is not easy to obtain from operational data. This paper proposes an alternative approach to optimizing multistage manufacturing processes. The proposed method is an extension of the patient rule induction method that determines the optimal control bounds of the process variables and the optimal tolerances of the quality characteristics. Its major characteristics are discussed via a simulation study.
In dual response surface optimization, the mean and the standard deviation responses are often in conflict. To obtain a satisfactory compromise, a decision maker (DM)'s preference information on the tradeoffs between the responses should be incorporated into the problem. In most of the existing works, the DM's preference information is incorporated into the problem prior to solving it. In this work, we propose a posterior preference articulation method for dual response surface optimization. The posterior preference articulation method first finds a set of nondominated solutions without the DM's preference information, and then enables the DM to choose the best one among the nondominated solutions.
As the service sector is rapidly growing, one of the challenges faced by the service industries is the lack of effective methodologies for new service development (NSD). The NSD process defines the what and the how of a new service that differentiate it from the competition. In essence, the identification of the right service concept and the right service process design in a timely manner is the core of an NSD process. Notwithstanding the importance of the NSD process, the development of a systematic approach to NSD has scarcely been addressed. In practice, NSD has largely relied upon the knowledge and experience of engineers without the support of a well-defined process. This paper proposes a systematic model for NSD, with an emphasis on improving its rapidity. The model consists of three phases: service concept development (Phase I), service process design (Phase II), and service performance verification (Phase III). Phase I develops new service concepts which would fill the gap between latent customer needs and the services currently available. Phase II converts a service concept into an implementable design of the service process using a standardized IT-based architecture. Phase III then verifies and optimizes the performance of the service process in a virtual environment. Owing to the integrated approach with the aid of IT modeling techniques, the proposed model allows a rapid as well as effective NSD process. The proposed model is demonstrated through a case study in the IT-based knowledge-intensive service industry. A special focus is placed on developing innovative and high value-added services in a short timeframe. The model proposed in this paper can be utilized as a template for the effective development of new services in service industries where the rapid development of new services is crucial.
An extended QFD planning model is presented for selecting design requirements (DRs) that consider longitudinal effect. In the proposed model, the longitudinal effect is incorporated by introducing a time dimension into the existing house of quality structure. As a consequence of explicitly considering the longitudinal effect, the proposed model yields not only an optimal set of DRs but also the timing of their selection. The proposed model is demonstrated through a case study for improving customer loyalty in the high-speed internet service.
QFD has been widely used to support decision-making across various areas. However, the prioritization of engineering characteristics (ECs), an important basis for such decisions, may be misleading in conventional QFD because it does not consider the uncertainty of the input information. To avoid misleading prioritizations, an in-depth study of robustness to this uncertainty is needed. Motivated by this, this paper proposes two robustness indices for evaluating prioritization decisions and a method for robust prioritization. The two indices evaluate the robustness of a prioritization from two perspectives: the absolute ranking of the ECs and the priority relationships among the ECs. Based on the indices, a robust prioritization can be conducted that identifies the ECs, or the priority relationships among them, with high robustness from each perspective.
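One plausible way to estimate such robustness indices is by Monte Carlo simulation over the uncertain house-of-quality entries. The sketch below is an assumption-laden illustration (the weights, relationship matrix, and ±delta uncertainty model are all hypothetical, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, 0.3, 0.2])              # customer-attribute weights (hypothetical)
R = np.array([[9, 3, 1],                   # nominal HOQ relationship matrix
              [3, 9, 3],
              [1, 1, 9]], dtype=float)
delta, n_sim = 1.0, 10_000

base_rank = np.argsort(-(w @ R))           # nominal EC ranking
rank_hits = np.zeros(R.shape[1])           # index 1: absolute-rank retention
pair_hits = 0                              # index 2: top-priority relation retention
for _ in range(n_sim):
    Rs = R + rng.uniform(-delta, delta, R.shape)  # perturbed inputs
    rank = np.argsort(-(w @ Rs))
    rank_hits += (rank == base_rank)       # same EC holds each rank position
    pair_hits += rank[0] == base_rank[0]   # top EC keeps its top position
print("P(rank position held by same EC):", rank_hits / n_sim)
print("P(top EC stays top):", pair_hits / n_sim)
```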
Most work in multiresponse surface methodology has focused mainly on the optimization issue, assuming that suitable models have been built. Though crucial for optimization, a good empirical model is not easy to obtain from manufacturing process data. An alternative is to find the optimal condition of the input variables directly, without an explicit model. Since this approach essentially relies on historical data, it can be called a data mining approach. This paper proposes a data mining approach to multiresponse problems. The overall procedure of the proposed approach is presented and then illustrated via a steel manufacturing problem.
Quality function deployment (QFD) provides a specific approach for ensuring quality throughout each stage of the product development and production process. It has proven useful in reducing the product development cycle time while simultaneously improving product quality and delivering the product at a lower cost. Since the focus of QFD is placed on the early stage of product development, uncertainty in the input information of QFD is inevitable. If this uncertainty is neglected, the QFD analysis results are likely to be misleading. Practitioners therefore need a QFD methodology that can model, analyze, and dampen the effects of the uncertainty and variability in a systematic manner. Robust QFD is an extended version of the QFD methodology that is robust to the uncertainty of the input information and the resulting variability of the QFD output. In Robust QFD, the uncertainty of the input information is first modeled quantitatively. Utilizing the modeled uncertainty, the variability of the QFD output is formally analyzed. Given this variability, the robustness of the output is evaluated. Finally, effective strategies for improving the robustness are identified. This paper discusses recent research issues in Robust QFD, chiefly the prioritization of the engineering characteristics, robustness evaluation and improvement, and a web-based Robust QFD optimizer. Our recent results on some of these issues are also presented.
Quality function deployment (QFD) provides a specific approach for ensuring quality throughout each stage of the product development and production process. Since the focus of QFD is placed on the early stage of product development, uncertainty in the input information of QFD is inevitable. If this uncertainty is neglected, the QFD analysis results are likely to be misleading. This paper proposes an extended version of the QFD methodology, called Robust QFD, which considers the uncertainty of the input information and the resulting variability of the output. The proposed methodology aims to model, analyze, and dampen the effects of the uncertainty and variability in a systematic manner. To support the methodology, a web-based QFD software tool has been developed based upon Robust QFD. The software will be helpful for novices as well as experts in the Robust QFD methodology.
As the service sector rapidly grows, one of the challenges faced by the service industries is the lack of effective methodologies for new service development. The service concept, which defines the what and the how of a new service, plays a key role in new service development. In essence, identifying the right service concept is the core of a new service development process. Notwithstanding its importance, the development of a systematic approach to new service concept generation has scarcely been addressed. In practice, new service concept generation has largely relied upon the knowledge and experience of engineers, not customers, without the support of a well-defined process. This paper proposes a systematic framework for developing new service concepts, with an emphasis on the customer's perspective. The framework consists of three phases: identification of customer needs (Phase 1), extraction of new service opportunities (Phase 2), and generation of new service concepts (Phase 3). Phase 1 identifies latent customer needs by observing the typical life pattern of the target customers. Phase 2 compares the latent customer needs with the services currently available; the gap between the two sets represents the unsatisfied needs and thus constitutes the new service opportunities. Phase 3 then generates the new service concepts that would fill the gap. A step-by-step procedure is provided for each phase. The proposed framework is demonstrated through a case study in the telecommunications industry, with a special focus on generating innovative, value-added, internet-based, convergence-type service concepts. In the case study, a survey of ten customers identified the latent customer needs; 61 new service opportunities were extracted; and 129 new service concepts were generated. The framework can serve as a template for the effective development of new services, not just in the telecommunications industry but in any service industry where customer-centered service concepts are crucial.
Quality function deployment (QFD) provides a specific approach for ensuring quality throughout each stage of the product development and production process. It has proven useful in reducing the product development cycle time while simultaneously improving product quality and delivering the product at a lower cost. Since the focus of QFD is placed on the early stage of product development, uncertainty in the input information of QFD is inevitable. If this uncertainty is neglected, the QFD analysis results are likely to be misleading. Practitioners therefore need a QFD methodology that can model, analyze, and dampen the effects of the uncertainty and variability in a systematic manner. This paper proposes an extended version of the QFD methodology, called Robust QFD, which is robust to the uncertainty of the input information and the resulting variability of the QFD output. In the proposed framework, the uncertainty of the input information is first modeled quantitatively. Utilizing the modeled uncertainty, the variability of the QFD output is formally analyzed. Given this variability, the robustness of the output is evaluated. Finally, effective strategies for improving the robustness are identified.
The project objectives, called critical-to-quality characteristics (CTQs) in Six Sigma, should be defined to faithfully reflect the customer requirements. This paper proposes a systematic method for generating CTQ candidates in the DFSS (Design for Six Sigma) context. The unique characteristics of the proposed method are demonstrated via a case study.
Multiresponse surface optimization requires the decision maker's preference information on the tradeoffs among the multiple responses. This paper proposes an integration of the desirability function method and the interactive optimization method, in which the desirability function is determined and optimized in an interactive manner.
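For reference, a common desirability formulation in the Derringer-Suich style, which is one plausible instantiation of the desirability function referred to here (the notation is assumed, not taken from the paper); the shape parameters $r_i$ are natural candidates for interactive adjustment:

$$
d_i(\hat{y}_i) =
\begin{cases}
0, & \hat{y}_i < L_i,\\[4pt]
\left(\dfrac{\hat{y}_i - L_i}{T_i - L_i}\right)^{r_i}, & L_i \le \hat{y}_i \le T_i,\\[4pt]
1, & \hat{y}_i > T_i,
\end{cases}
\qquad
D = \Bigl(\prod_{i=1}^{m} d_i\Bigr)^{1/m},
$$

where $L_i$ is the lower acceptability limit of response $i$, $T_i$ its target, and $D$ the overall desirability to be maximized.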
The conventional QFD methodology assumes that all the input information is certain. In practice, however, uncertainty in the input information is inevitable. This paper proposes an extended QFD methodology, called Robust QFD, which considers the uncertainty of the input information and the resulting variability of the QFD output.
High-speed internet service, based on ADSL (asymmetric digital subscriber line) or VDSL (very high-speed digital subscriber line) technology, has achieved a remarkable increase in penetration in recent years. Until recently (say, until 2002), the companies in this market believed their competitiveness came from fast acquisition of new customers, a typical characteristic of new telecommunication services. However, as the market becomes saturated, customer retention has become more critical than new customer acquisition. In the high-speed internet service industry, as in any other service industry, a high level of service performance is a differentiator in competition and, in fact, an effective way to improve customer satisfaction and loyalty. Service performance in high-speed internet service consists of two dimensions: network performance and customer-service performance. According to a recent study, network performance is considered about four times more important than customer-service performance. The objective of this paper is to identify the causal relationships among network performance, customer satisfaction, and customer loyalty in the high-speed internet service context. The major finding from the relationship model is that the speed-related network performance measures have highly significant and large effects on customer satisfaction. In particular, the upload speed has the largest effect among the network performance measures.
This paper proposes a new framework for variability analysis in QFD. The proposed framework considers the uncertainty of the information contained in the house of quality chart and analyzes the resulting variability in the outcome. A new scenario for prioritizing engineering characteristics is presented and illustrated using a case example.
Mean squared error (MSE) is an effective criterion to combine the mean and the standard deviation responses in dual response surface optimization. The bias and variance components of MSE need to be weighted properly in the given problem situation. This paper proposes a systematic method to determine the relative weights of bias and variance in accordance with a decision maker’s prior and posterior preference structure.
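In commonly used notation (assumed here, not quoted from the paper), the weighted criterion takes the form

$$
\mathrm{WMSE}(\mathbf{x}) \;=\; \lambda\,\bigl(\hat{y}_{\mu}(\mathbf{x}) - T\bigr)^{2} \;+\; (1-\lambda)\,\hat{y}_{\sigma}^{2}(\mathbf{x}), \qquad 0 \le \lambda \le 1,
$$

where $\hat{y}_{\mu}$ and $\hat{y}_{\sigma}$ are the fitted mean and standard-deviation response surfaces, $T$ is the target, and $\lambda$ is the relative weight of the bias component that the method elicits from the decision maker's preference structure.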
The selection of critical variables, called critical-to-process (CTP) variables, is a key factor in the success of a quality improvement initiative. A systematic procedure for selecting CTPs via data mining is proposed and illustrated using a case example from a manufacturing process.
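One plausible instantiation of the data-mining step is to rank process variables by the importance scores of a fitted tree ensemble and treat the top-ranked variables as CTP candidates. The sketch below uses synthetic stand-in data; the variable names and the importance-based ranking rule are assumptions, not the paper's procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                    # process variables x1..x6 (synthetic)
y = 2 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=0.5, size=500)  # quality response

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(-model.feature_importances_)
for j in order[:3]:                              # top-ranked CTP candidates
    print(f"x{j + 1}: importance={model.feature_importances_[j]:.3f}")
```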
A new loss function-based method for multiresponse optimization is presented. The proposed method introduces predicted future responses into a loss function, which accommodates robustness and quality of predictions as well as bias in a single framework. Properties of the proposed method are revealed via two illustrative examples. The proposed method is shown to give more reasonable results than the existing methods when both robustness and quality of predictions are important issues.
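In the spirit of this approach (notation assumed, not quoted from the paper), a quadratic loss applied to the predicted future response decomposes into a bias term and a covariance term:

$$
E\!\left[\bigl(\hat{\mathbf{Y}}(\mathbf{x}) - \mathbf{T}\bigr)^{\!\top}\mathbf{C}\,\bigl(\hat{\mathbf{Y}}(\mathbf{x}) - \mathbf{T}\bigr)\right]
= \bigl(\hat{\mathbf{y}}(\mathbf{x}) - \mathbf{T}\bigr)^{\!\top}\mathbf{C}\,\bigl(\hat{\mathbf{y}}(\mathbf{x}) - \mathbf{T}\bigr) + \operatorname{tr}\!\bigl(\mathbf{C}\,\boldsymbol{\Sigma}(\mathbf{x})\bigr),
$$

where $\hat{\mathbf{y}}$ is the vector of predicted responses, $\mathbf{T}$ the target vector, $\mathbf{C}$ a cost matrix, and $\boldsymbol{\Sigma}(\mathbf{x})$ the covariance of the predicted future responses; the trace term is what lets a single loss account for robustness and quality of predictions alongside bias.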
Mean squared error (MSE) is an effective criterion to combine the mean and the standard deviation responses in the dual response surface approach. MSE is the sum of bias and variance components, which need to be weighted under certain circumstances. This paper proposes a novel method to assess the relative weights of bias and variance in MSE. The proposed method utilizes pairwise orderings of bias-variance vectors for the weight assessment. The method is illustrated through an example problem, and its characteristics are discussed.
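A hedged guess at the mechanics, for illustration only: if each pairwise ordering of bias-variance vectors is read as a linear inequality on the weights, then any weight pair consistent with all stated orderings can be found by a feasibility linear program (all numbers below are invented):

```python
from scipy.optimize import linprog

# DM judgments: each row (b1, v1, b2, v2) means (b1, v1) is preferred to
# (b2, v2), i.e. w_b*b1 + w_v*v1 <= w_b*b2 + w_v*v2 (illustrative values).
prefs = [(1.0, 4.0, 2.0, 3.0), (0.5, 5.0, 3.0, 1.0)]
A_ub = [[b1 - b2, v1 - v2] for b1, v1, b2, v2 in prefs]
b_ub = [0.0] * len(prefs)
# Weights sum to one and are nonnegative; any feasible point is consistent.
res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1, 1]], b_eq=[1], bounds=[(0, 1), (0, 1)])
print("consistent (w_b, w_v):", res.x if res.success else "infeasible")
```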
A new statistical diagnosis method for a batch process is proposed. The proposed method consists of two phases: off-line model building and on-line diagnosis. The off-line model building phase constructs an empirical model, called a discriminant model, using various past batch runs. When an out-of-control state of a new batch is detected, the on-line diagnosis phase is initiated. The behavior of the new batch is referenced against the model, developed in the off-line model building phase, to make a diagnostic decision. The diagnosis performance of the proposed method is tested using a dataset from a PVC batch process. It has been shown that the proposed method outperforms existing PCA-based diagnosis methods, especially at the onset of a fault.
Quality function deployment (QFD) is a useful tool for identifying a project topic, called a critical-to-quality characteristic (CTQ), in the early stage of a Six Sigma project. In this talk, we first review the limitations of the existing QFD framework and then present some recent methodological extensions that enhance its usefulness in practice. In particular, the validation of the information contained in a house of quality chart and the effective use of the collected information will be discussed.
Response surface methodology (RSM) consists of a group of techniques used in the empirical study of the relationship between a response and a number of input variables. The experimenter attempts to find the optimal setting of the input variables that maximizes (or minimizes) the response. Most work in RSM has focused on the case where there is only one response of interest. A common problem in product or process design, however, is the selection of optimal parameter levels involving the simultaneous consideration of multiple response variables, called a multiresponse problem. Various attempts have been made to model and solve multiresponse problems, including the priority-based approach, the desirability function approach, and the loss function approach. In this talk, we first formally define the multiresponse problem and then review the existing work in this field. The strengths and weaknesses of the existing methods are presented, and some promising topics for future research are discussed.
To solve the diagnosis problem of a batch process, a new statistical diagnosis method based on Fisher discriminant analysis (FDA) is proposed in this work. The proposed method utilizes FDA to produce an empirical model, against which the future behavior of a new batch is referenced to determine the assignable cause of a fault. To construct this empirical model, called a discriminant model, we utilize various past batch runs whose assignable causes have been identified. The diagnosis performance of the proposed method is demonstrated using data from PVC batch processes. The method is shown to produce reliable and stable diagnosis performance, especially at the onset of a fault.
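A minimal sketch of the discriminant-model idea using scikit-learn's linear discriminant analysis; the unfolding of batch trajectories into fixed-length feature vectors and the synthetic fault signatures are assumptions, not the paper's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Off-line phase: past batch runs, unfolded to feature vectors, labeled
# with their identified assignable causes (synthetic stand-in data).
X_hist = rng.normal(size=(120, 20))
X_hist[:40] += 1.5            # fault A signature
X_hist[40:80] -= 1.5          # fault B signature
labels = np.array(["A"] * 40 + ["B"] * 40 + ["normal"] * 40)

model = LinearDiscriminantAnalysis().fit(X_hist, labels)

# On-line phase: reference a new out-of-control batch against the model.
x_new = rng.normal(size=(1, 20)) + 1.5
print("diagnosed cause:", model.predict(x_new)[0])
print("posterior probabilities:", model.predict_proba(x_new).round(3))
```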
The Step method is one of the well-known multi-objective optimization (MOO) techniques. Its basic idea is to solve a multi-objective optimization problem through an interactive procedure between a model and a decision maker. The model generates a local solution under the given constraints representing the decision maker's preference information. The decision maker then provides additional information to the model as to whether he or she is satisfied with the result. The method, however, does not consider the differing degrees of satisfaction with the solution, that is, how much he or she is satisfied. In this paper, we propose a modified method that uses a fuzzy modeling concept to resolve this limitation. The advantages and characteristics of the proposed method are discussed by applying it to a multiresponse surface problem.
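A compact sketch of one Step-method (STEM) iteration on a toy two-objective problem; the proposed fuzzy extension would, roughly, replace the hard satisfied/not-satisfied answer with a membership degree, which is not shown here (all functions and numbers are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2      # objective 1 (toy)
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2      # objective 2 (toy)
ideal = np.array([0.0, 0.0])                    # per-objective minima
w = np.array([0.5, 0.5])                        # STEM weights (illustrative)

def chebyshev(x):
    # Minimize the largest weighted deviation from the ideal point.
    return max(w[0] * (f1(x) - ideal[0]), w[1] * (f2(x) - ideal[1]))

res = minimize(chebyshev, x0=[0.5, 0.5], method="Nelder-Mead")
x_star = res.x
# The DM reviews (f1, f2) at x_star; if satisfied with f1, the next
# iteration would add the constraint f1(x) <= f1(x_star) + slack and
# set the weight of f1 to zero, then re-solve.
print("compromise:", x_star, "objectives:", f1(x_star), f2(x_star))
```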
Quality function deployment (QFD) is a concept and mechanism for translating the "voice of the customer" through the various stages of product planning, engineering, and manufacturing into a final product. Notwithstanding the rapid growth of the QFD literature, the development of systematic procedures for the effective use of QFD has scarcely been addressed. In this paper, we first review the limitations of the existing QFD framework and then present some recent methodological extensions that enhance its usefulness in practice. In particular, the validation of the information contained in a house of quality chart and the effective use of the collected information are discussed.
A pattern-based diagnosis method is proposed for on-line process diagnosis. Principal component analysis is utilized to model and monitor the variability of a process. A triangular representation of process trends in the principal component space is employed to represent the characteristic pattern of each fault. These fault patterns are compared with each pattern in a fault library. A likelihood index for each cause candidate is introduced, and the cause candidate with the highest likelihood index is selected as the assignable cause. The proposed method is demonstrated using simulated data from the Tennessee Eastman process, and several comparative studies on diagnosis resolution and robustness to noise are presented.
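A hedged sketch of the monitoring backbone only, scoring new samples in the principal component space with Hotelling's T-squared statistic; the triangular trend representation and the likelihood index are beyond this snippet, and the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 10))          # in-control operating data (synthetic)

pca = PCA(n_components=3).fit(X_normal)

def t2(x):
    # Hotelling's T^2 in the retained principal component space.
    scores = pca.transform(x.reshape(1, -1))[0]
    return np.sum(scores ** 2 / pca.explained_variance_)

x_new = rng.normal(size=10) + np.r_[3, np.zeros(9)]  # sample with a shift
print("T2 statistic:", t2(x_new))
# In the full method, samples exceeding a T2 control limit would be matched
# against the fault-pattern library via the likelihood index.
```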
A common problem encountered in product or process design is the selection of optimal parameters involving the simultaneous consideration of multiple response characteristics, called a multiple response surface problem. Several approaches have been proposed for multiple response surface optimization (MRO), including the desirability function approach and the loss function approach. The existing MRO approaches require that all the preference information of the decision maker be extracted prior to solving the problem. However, such prior preference articulation is difficult to implement in practice. This paper proposes an interactive optimization approach to the MRO problem to overcome the common limitations of the existing approaches. In particular, we demonstrate the use of the Step method, one of the well-known interactive optimization methods in the multiple objective optimization literature, in solving an MRO problem.