Abstract
A wafer bin map (WBM) represents the locational information of defective chips on the wafer. The spatial correlation of defects on the wafer provides crucial information for the root cause diagnosis of defects in wafer fabrication. The spatial correlation is classified as a defect pattern for efficient diagnostics. A defect pattern taxonomy should be defined in advance for coherent classification of defect patterns. Various taxonomies have been used in previous studies, but they share common limitations: the differentiation among defect patterns is unclear, the set of predefined defect patterns is insufficient, and newly emerged defect patterns cannot be accommodated. This study proposes a concept of spatial dimension-based defect pattern taxonomy and a procedure for developing it. Defect patterns are defined by three spatial dimensions, namely, Shape, Size, and Location. The development procedure was applied at a major NAND flash memory semiconductor manufacturer for two years. Results show that the spatial dimension-based taxonomy can improve the performance of a defect pattern classification system by alleviating the common existing limitations. Moreover, defect patterns that are meaningful for diagnostics are retained through the engineers' involvement in the development procedure.
Abstract
With the increase in electricity consumption in residential buildings, advanced metering infrastructures (AMIs) have been widely installed in residential buildings. AMIs help households monitor and reduce their electricity consumption. Although the usefulness of AMIs has been validated, several barriers to their widespread use still exist. This study considers households' information privacy concerns (IPCs) and perceived electricity usage habits (PEUHs) as barriers to AMI penetration. AMI data on electricity usage may reveal extensive information about households, such as indoor behaviors and appliance types. This information may cause households to be concerned about invasion of privacy, and such concerns may affect AMI usage intentions. In terms of PEUHs, electric power companies argue that households with undesirable electricity usage habits, such as a large amount of electricity usage with unpredictable usage patterns, are prospective AMI customers. This study develops IPC and PEUH scales and conducts a path analysis based on the framework of the technology acceptance model to validate the influence of IPCs and PEUHs on AMI usage intention. The identified effects of IPCs and PEUHs on AMI usage intention are expected to provide practical information for understanding how IPCs and PEUHs influence AMI usage intention from the perspective of households.
Abstract
The utilization of automatic test equipment (ATE) in a wafer test is crucial because it helps maintain optimal temperatures and simultaneously provides electrical power for multiple chips. Ensuring the proper functioning of ATE is essential to avoid productivity losses, such as extended wafer testing times and potential malfunctions. Although previous research has investigated the detection of ATE malfunctions using yields, probe cards, and electrical characteristic data, these approaches exhibit limitations in data characteristics and are not suitable for continuous monitoring. This study introduces a novel approach for detecting abnormal behavior within ATE by using event log data and an autoencoder. Autoencoders have demonstrated efficacy in anomaly detection, making them apt for detecting abnormal behavior within ATE from event log data. To develop this approach, data analysts and wafer test engineers collaborated to establish a practical methodology that encapsulates the operational characteristics of ATE. This methodology comprises four distinct steps. The proposed approach was assessed using simulated event log data and was proven effective in detecting abnormal behavior within ATE. Furthermore, the approach was applied to real-world ATE data from a semiconductor company, identifying abnormal temperature control and monitoring behavior. This detection can lead to reduced wafer test times and contribute to environmental protection efforts.
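To make the idea concrete, the following is a minimal sketch of autoencoder-based anomaly detection of the kind described above, assuming each event-log window has already been encoded as a fixed-length numeric feature vector; the encoding, network size, and 99th-percentile threshold are illustrative assumptions, not the paper's actual methodology.

```python
# Minimal sketch: flag abnormal ATE cycles by autoencoder reconstruction error.
# Assumes each event-log window is encoded as a fixed-length numeric feature
# vector (e.g., event counts, inter-event times); encoding and threshold are
# illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 12))          # stand-in for normal-operation features

scaler = StandardScaler().fit(X_train)
Xs = scaler.transform(X_train)

# An MLP trained to reproduce its input acts as a simple autoencoder;
# the narrow middle layer forces a compressed representation.
ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=2000, random_state=0)
ae.fit(Xs, Xs)

recon_err = np.mean((ae.predict(Xs) - Xs) ** 2, axis=1)
threshold = np.quantile(recon_err, 0.99)      # errors above this are "abnormal"

def is_abnormal(x_new: np.ndarray) -> bool:
    z = scaler.transform(x_new.reshape(1, -1))
    return float(np.mean((ae.predict(z) - z) ** 2)) > threshold
```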
Abstract
A wafer consists of several chips, and serial electrical tests are conducted for each chip to investigate whether the chip is defective. A bin indicates the test results for each chip with information on which tests the chip failed. A wafer bin map (WBM) shows the locations and bins of the defects on the wafer. WBMs showing spatial patterns of defects usually result from assignable causes in the wafer fabrication process; hence, they should be classified in advance. The existing defect-pattern taxonomies do not consider bins, although useful information can be obtained from them. We propose a taxonomy that consists of the shape, size, location, and bin dimensions. The bin dimension is developed using the Bin2Vec method, which determines an RGB (red-green-blue) code for each bin according to the spatial similarity between bins. Three levels of the bin dimension are defined by analyzing a large number of WBMs using Bin2Vec and clustering methods. Compared with the existing taxonomies, the proposed taxonomy has the advantage of identifying major bins of defect patterns, new defect patterns, and non-critical defect patterns. A high-quality training dataset was obtained using the proposed taxonomy; consequently, a defect pattern classification model with satisfactory classification performance could be obtained.
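As a rough illustration of the final step of a Bin2Vec-style pipeline, the sketch below maps per-bin embedding vectors to RGB codes so that spatially similar bins receive similar colors. How the embeddings themselves are learned is not specified in the abstract and is not shown here; the sketch simply assumes an (n_bins x d) array in which similar bins have nearby vectors.

```python
# Illustrative sketch: turn bin embeddings into RGB codes. The embedding
# matrix is an assumed input, not the paper's actual Bin2Vec output.
import numpy as np
from sklearn.decomposition import PCA

def embeddings_to_rgb(bin_embeddings: np.ndarray) -> np.ndarray:
    """Project embeddings to 3 dimensions and rescale each to 0-255."""
    coords = PCA(n_components=3).fit_transform(bin_embeddings)
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    rgb = 255 * (coords - lo) / np.where(hi > lo, hi - lo, 1.0)
    return rgb.round().astype(int)            # one (R, G, B) row per bin

rng = np.random.default_rng(1)
fake_embeddings = rng.normal(size=(20, 16))   # 20 bins, 16-dim vectors (stand-in)
print(embeddings_to_rgb(fake_embeddings)[:3])
```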
Abstract
As electricity consumption in residential buildings has increased, advanced metering infrastructures (AMIs) have been widely installed in residential buildings. AMIs measure and collect metering data of electricity (AMI data) consumed in each household. Companies, such as electricity suppliers and service and policy developers, have used AMI data to develop various services and policies. They rely on actual electricity consumption from AMI data to understand households’ needs. However, the actual consumption may be different from household’s perceived consumption. Household’s perceived consumption could be affected by their respective characteristics, such as demographic and psycho-social characteristics. Accordingly, this research examines a difference between household’s actual and perceived electricity consumption and determines household characteristics that affect the difference. For achieving this goal, this research uses AMI data (actual consumption) and survey data (perceived consumption) from 142 households in South Korea. Results confirm the existence of the difference and the influence of specific household characteristics on the difference between actual and perceived electricity consumptions. House and household sizes affect actual consumption, whereas behaviors that interact with electrical appliances affect perceived consumption. The findings would be useful for the companies in developing services and policies that satisfy household requirements.
Abstract
Currently, many manufacturing companies obtain large amounts of operational data from manufacturing lines due to advances in information technology. Thus, various data mining methods have been applied to analyze the data to optimize the manufacturing process. Most of the existing data mining-based optimization methods assume that the relationships between input and response variables do not change over time. However, because it often takes a long time to collect a large amount of operational data, the relationships may change during data collection. In such a case, the operational data should be regarded as time-series data, and recent data should be considered more important than old data. In this study, we employed the patient rule induction method (PRIM), one of the data mining methods applied for process optimization. In addition, we employed an exponentially weighted moving average (EWMA) statistic to assign larger weights to recent data. Based on PRIM and EWMA, the proposed method attempts to obtain optimal intervals for the input variables in which the current performance of the response is better. The proposed method is illustrated with a hypothetical example and validated through a real case study of a steel manufacturing process.
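For reference, the EWMA statistic mentioned above follows the standard recursion, which makes explicit how it downweights older observations geometrically (the smoothing constant $\lambda$ is user-chosen; how the paper couples these weights with PRIM is not reproduced here):

$$z_t \;=\; \lambda x_t + (1-\lambda)\,z_{t-1} \;=\; \lambda \sum_{k=0}^{t-1} (1-\lambda)^{k}\, x_{t-k} \;+\; (1-\lambda)^{t} z_0, \qquad 0 < \lambda \le 1,$$

so an observation $k$ periods old receives weight $\lambda(1-\lambda)^{k}$, and a larger $\lambda$ places more emphasis on recent data.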
Abstract
A wafer consists of several chips, and a wafer map shows the locations of defective chips on the wafer. The locational pattern of defective chips on the wafer map provides crucial information for improving the semiconductor wafer fabrication process. Recently, automatic defect pattern classification using convolutional neural networks (CNN) has become popular because of its good classification performance. However, good performance is guaranteed only when a large amount of well-balanced training and test data is available. Such data are difficult to obtain in real practice because the training and test data are obtained by manual inspection. In this paper, we propose a systematic method to resolve the small and imbalanced wafer map data issues. Specifically, we first selected wafers showing clear defect patterns and then replicated them by randomly applying horizontal flips, vertical flips, and rotations. In the case study, we obtained real wafer map data from a semiconductor company. By applying the proposed method, a large amount of well-balanced data was obtained. The CNN model for defect classification was fitted to the obtained data and showed good classification performance.
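A minimal sketch of the replication step described above is shown below, assuming a square 2-D integer wafer map (0 = pass, 1 = fail); the encoding and the use of 90-degree rotations (arbitrary-angle rotation would require interpolation) are illustrative assumptions.

```python
# Sketch: augment a selected wafer map by randomly applying horizontal flip,
# vertical flip, and rotation. Square maps are assumed so rot90 preserves shape.
import numpy as np

def augment_wafer_map(wmap: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    out = wmap.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                   # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                   # vertical flip
    out = np.rot90(out, k=rng.integers(0, 4))  # rotate by 0/90/180/270 degrees
    return out

rng = np.random.default_rng(0)
wafer = (np.random.default_rng(1).random((32, 32)) > 0.9).astype(int)
replicas = [augment_wafer_map(wafer, rng) for _ in range(10)]
```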
Abstract
The conventional approach for optimizing multiple responses is to fit multiple response surface models and then analyze them to obtain optimal settings for the input variables. However, it is difficult to obtain reliable response surface models when dealing with large amounts of data. In this article, a new approach to multiresponse optimization based on a classification and regression tree method is presented. Desirability functions are employed to simultaneously optimize the multiple responses. The case study of a steel manufacturing company with large amounts of data shows that the proposed method obtains an optimal region in which the multiple responses are simultaneously optimized.
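For context, a standard larger-the-better desirability function (in the Derringer and Suich form) and the overall desirability typically used in such methods are shown below; the abstract does not specify the exact shapes used, so this is the generic form:

$$d_i(\hat{y}_i)=\begin{cases}0, & \hat{y}_i \le L_i,\\[4pt] \left(\dfrac{\hat{y}_i-L_i}{T_i-L_i}\right)^{r_i}, & L_i < \hat{y}_i < T_i,\\[4pt] 1, & \hat{y}_i \ge T_i,\end{cases} \qquad D=\left(\prod_{i=1}^{m} d_i\right)^{1/m},$$

where $L_i$ and $T_i$ are the minimum acceptable and target values of response $i$, and $r_i$ controls the shape of the function.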
Abstract
In a multistage manufacturing process, identical machines are utilized at each process stage. In such a configuration, each product can go through a different process path (hereafter referred to as a path) during production. In practice, however, machines at a process stage may have different operational performances, and thus the quality of the final product may vary depending on the path. This research proposes an approach to derive the golden paths (GPs), which are expected to produce products of higher quality than a user-defined quality level, when machine performance degrades over time. To this end, the health indicator (HI) of a machine, which represents its performance at a certain time, is introduced to reflect the performance degradation. In deriving GPs, the proposed approach discovers a specific set of machine sequence patterns (MSPs), sequential combinations of machines in different process stages, satisfying several rules. Based on the discovered MSPs and the HI information, the proposed approach generates the GPs at a particular future point in time. The viability of the proposed approach is evaluated through a simulated experiment. The derived GPs, which reflect machine performance degradation over time, are expected to contribute to increasing the production of superior-quality products and to reducing the re-production costs incurred by utilizing non-GPs.
Abstract
A multistage process consists of consecutive stages. In this process, each stage has multiple responses and is affected by its preceding stage while, at the same time, affecting the following stage. This complex structure makes it difficult to optimize the multistage process. Recently, it has become easy to obtain a large amount of operational data from multistage processes owing to the development of information technologies. The proposed method employs a data mining method called a classification and regression tree for analyzing the data and desirability functions for simultaneously optimizing the multiple responses. To consider the relationship between stages, a backward optimization procedure that treats the multiple responses of the preceding stage as the input variables is proposed. The proposed method is described using a steel manufacturing process example and is compared with existing multiresponse optimization methods. The case study shows that the proposed method works well and outperforms the existing methods.
Abstract
To solve multiple response optimization problems that often involve incommensurate and conflicting responses, a robust interactive desirability function approach is proposed. The proposed approach consists of a parameter initialization phase and calculation and decision-making phases. It considers a decision maker’s preference information regarding tradeoffs among responses and the uncertainties associated with predicted response surface models. The proposed method is the first to consider model uncertainty using an interactive desirability function approach. It allows a decision maker to adjust any of the preference parameters, including the shape, bound and target of a modified robust function with consideration of model uncertainty in a single and integrated framework. This property of the proposed method is illustrated using a tire-tread compound problem, and the robustness of the adjustments for the approach is also considered. Thus, the new method is shown to be highly effective in generating a compromise solution that is faithful to the decision maker’s preference structure and robust to uncertainties associated with model predictions.
Abstract
Background: A lifelogs-based wellness index (LWI) is a function for calculating wellness scores based on health behavior lifelogs (eg, daily walking steps and sleep times collected via a smartwatch). A wellness score intuitively shows the users of smart wellness services the overall condition of their health behaviors. LWI development includes estimation (ie, estimating coefficients in LWI with data). A panel data set comprising health behavior lifelogs allows LWI estimation to control for unobserved variables, thereby resulting in less bias. However, these data sets typically have missing data due to events that occur in daily life (eg, smart devices stop collecting data when batteries are depleted), which can introduce biases into LWI coefficients. Thus, the appropriate choice of method to handle missing data is important for reducing biases in LWI estimations with panel data. However, there is a lack of research in this area.
Objective: This study aims to identify a suitable missing-data handling method for LWI estimation with panel data.
Methods: Listwise deletion, mean imputation, expectation maximization–based multiple imputation, predictive-mean matching–based multiple imputation, k-nearest neighbors–based imputation, and low-rank approximation–based imputation were comparatively evaluated by simulating an existing case of LWI development. A panel data set comprising health behavior lifelogs of 41 college students over 4 weeks was transformed into a reference data set without any missing data. Then, 200 simulated data sets were generated by randomly introducing missing data at proportions from 1% to 80%. The missing-data handling methods were each applied to transform the simulated data sets into complete data sets, and coefficients in a linear LWI were estimated for each complete data set. For each proportion for each method, a bias measure was calculated by comparing the estimated coefficient values with values estimated from the reference data set.
Results: Methods performed differently depending on the proportion of missing data. For 1% to 30% proportions, low-rank approximation–based imputation, predictive-mean matching–based multiple imputation, and expectation maximization–based multiple imputation were superior. For 31% to 60% proportions, low-rank approximation–based imputation and predictive-mean matching–based multiple imputation performed best. For over 60% proportions, only low-rank approximation–based imputation performed acceptably.
Conclusions: Low-rank approximation–based imputation was the best of the 6 data-handling methods regardless of the proportion of missing data. This superiority is generalizable to other panel data sets comprising health behavior lifelogs given their verified low-rank nature, for which low-rank approximation–based imputation is known to perform effectively. This result will guide missing-data handling in reducing coefficient biases in new development cases of linear LWIs with panel data.
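For readers unfamiliar with the winning method, the following is a minimal sketch of low-rank approximation-based imputation in the iterative-SVD style; the rank, iteration count, and column-mean initialization are illustrative choices, not the study's exact configuration.

```python
# Hedged sketch of low-rank approximation-based imputation: alternate between
# a rank-r SVD reconstruction and refilling only the missing cells.
import numpy as np

def lowrank_impute(X: np.ndarray, rank: int = 3, n_iter: int = 100) -> np.ndarray:
    """Fill NaNs in X by iterative rank-r SVD reconstruction."""
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X, axis=0), X)    # start from column means
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # best rank-r approximation
        filled[mask] = approx[mask]                      # update only missing cells
    return filled

rng = np.random.default_rng(0)
true = rng.normal(size=(41, 7)) @ rng.normal(size=(7, 7))  # low-rank-ish panel
obs = true.copy()
obs[rng.random(obs.shape) < 0.3] = np.nan                  # 30% missing
imputed = lowrank_impute(obs)
```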
Abstract
Real-time collection of household electricity consumption data has been facilitated by advanced metering infrastructure. In recent studies, the collected data have been processed to provide information on household appliance usage. The noise caused by electrical appliances in neighboring households constitutes a major issue, being associated with discomfort and even mental diseases. The assessment of noise discomfort using electricity consumption data has not yet been addressed in the literature. In this study, a method that utilizes electricity consumption data to assess the levels of noise discomfort caused by electrical appliances in neighboring households is proposed. This method is based on the differences in the usage times of electrical appliances in a collective residential building. The proposed method includes the following four steps: data collection and preprocessing, residential unit clustering, noise discomfort modeling, and evaluation of noise discomfort. The method is demonstrated through a case study of a campus apartment building. Variations in the noise discomfort assessment model and measures for alleviating noise discomfort are also discussed. The proposed method can guide the application of electricity consumption data to the assessment and alleviation of noise discomfort from home appliances in an apartment building.
Abstract
A multistage manufacturing process (MMP) consists of several consecutive process stages, each of which has multiple machines performing the same functions in parallel. A manufacturing path (simply referred to as a path) is defined as an ordered set indicating a record of the machines assigned to a product at each process stage of an MMP. An MMP usually produces products through various paths. In practice, multiple machines in a process stage have different operational performances, which accumulate during production and affect the quality of products. This study proposes a heuristic approach to derive the golden paths that produce products whose quality exceeds the desired level. The proposed approach consists of a searching phase and a merging phase. The searching phase extracts two types of machine sequence patterns (MSPs) from a path dataset in an MMP. An MSP is a subset of a path that is defined as an ordered set of assigned machines from several process stages. The two extracted types of MSPs are: 1) superior MSPs, which affect the production of superior-quality products, and 2) inferior MSPs, which affect the production of inferior-quality products. The merging phase derives the golden paths by combining superior MSPs and excluding inferior MSPs. The proposed approach is verified by applying it to a hypothetical path dataset and the semiconductor tool fault isolation (SETFI) dataset. This verification shows that the proposed approach derives golden paths that exceed the predefined product quality level. This outcome demonstrates the practical viability of the proposed approach in an MMP.
Abstract
Most manufacturing industries produce products through a series of sequential stages, known as a multistage process. In a multistage process, each stage affects the stage that follows and often has multiple response variables. This paper suggests a new procedure for optimizing a multistage process with multiple response variables. It searches for an optimal setting of input variables directly from operational data according to a patient rule induction method to maximize a desirability function, to which the multiple response variables are converted. The proposed method is explained by a step-by-step procedure using a steel manufacturing process example.
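As an illustration of the PRIM mechanics referenced above, this sketch implements the basic top-down peeling loop on a dataset whose responses have already been converted to a desirability value; the peeling fraction, support floor, and omission of the pasting step are simplifications, not the paper's exact procedure.

```python
# Minimal PRIM peeling sketch: repeatedly trim a small fraction of the data
# from one edge of one input variable's range, keeping the cut that most
# increases the mean desirability inside the box.
import numpy as np

def prim_peel(X: np.ndarray, d: np.ndarray, alpha=0.05, min_support=0.1):
    box = np.column_stack([X.min(axis=0), X.max(axis=0)])  # [low, high] per input
    idx = np.arange(len(d))
    while len(idx) > min_support * len(d):
        best = None
        for j in range(X.shape[1]):
            for side, q in ((0, alpha), (1, 1 - alpha)):
                cut = np.quantile(X[idx, j], q)
                keep = idx[X[idx, j] >= cut] if side == 0 else idx[X[idx, j] <= cut]
                if len(keep) and (best is None or d[keep].mean() > best[0]):
                    best = (d[keep].mean(), j, side, cut, keep)
        if best is None or best[0] <= d[idx].mean():
            break                                          # no improving peel
        _, j, side, cut, idx = best
        box[j, side] = cut
    return box, idx                                        # intervals + points inside

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                             # input variables
d = np.exp(-20 * ((X[:, 0] - 0.7) ** 2 + (X[:, 1] - 0.3) ** 2))  # desirability
box, inside = prim_peel(X, d)
```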
Abstract
This study compares the performances of the generalized confidence interval (GCI) and modified sampling distribution (MSD) approaches in evaluating the capability of processes with one-sided tolerance in the presence of gauge measurement errors (GME). The performances of both approaches are measured through a series of simulations. In terms of coverage rates (CRs), the GCI and MSD approaches appear to work satisfactorily in the presence of GME, since the CRs of the lower confidence bound considering GME were all close to the nominal value. Furthermore, some of the CRs of the GCI approach considering GME were smaller than those of the MSD approach; the GCI approach has a better ability to assess process capability in the presence of GME. Based on the numerical results, both approaches can be recommended to practitioners who assess process performance for cases with one-sided tolerance when GME is inevitable.
Abstract
A multistage process consists of sequential stages, where each stage is affected by its preceding stage and, in turn, affects the stage that follows. The process described in this paper also has several input and response variables whose relationships are complicated. These characteristics make it difficult to optimize all responses in the multistage process. We modify a data mining method called the patient rule induction method and combine it with desirability function methods to optimize the mean and variance of the multiple responses in the multistage process. The proposed method is explained by a step-by-step procedure using a steel manufacturing process example.
Abstract
Online-to-offline (O2O) integration refers to the incorporation of separate online and offline service processes into a single service delivery. Advances in mobile devices and information and communication technology enable O2O integration, which has been applied to many services. This study proposes a new service blueprint, called the O2O Service Blueprint (O2O SB), which is specialized for visualizing and analyzing the service processes of O2O integration. A comprehensive literature review and text mining analysis are conducted on massive quantities of literature, articles, and application introductions to understand the characteristics of O2O integration and extract relevant keywords. Comparisons of the O2O SB with the conventional service blueprint and the Information Service Blueprint validate that our proposal can address the limitations of existing service blueprints. An evaluation through expert interviews confirms the completeness, utility, and versatility of the O2O SB. The proposed O2O SB presents a complete picture of the entire service process, whether online or offline. This SB helps users systematically understand the processes and formulate strategies for service improvement.
Abstract
Semiconductor wafers are fabricated through sequential process steps. Some process steps have a strong relationship with wafer yield, and these are called critical process steps. Because wafer yield is a key performance index in wafer fabrication, the critical process steps should be carefully selected and managed. This paper proposes a systematic and data-driven approach for identifying the critical process steps. The proposed method considers troublesome properties of the data from the process steps such as imbalanced data, missing values, and random sampling. As a case study, we analyzed hypothetical operational data and confirmed that the proposed method works well.
Abstract
In the semiconductor wafer fabrication process, wafers go through a series of sequential process steps. Typically, each process step has several machines, and the wafers are assigned to one of these machines whenever they enter the process step. When assigning wafers to machines, it is important to consider both the quality and productivity perspectives. Major semiconductor companies in Korea have recently implemented a wafer assignment system to improve wafer yield, a critical measure of semiconductor quality. This system, however, does not consider the productivity perspective. This paper presents a systematic method for assigning wafers to maximize the wafer yield while satisfying a predetermined target level of productivity. A simple hypothetical example is presented to illustrate the method.
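One plausible way to formalize such an assignment (the abstract does not give the paper's exact formulation, so this is an illustration only) is as a binary program that maximizes predicted yield subject to a productivity constraint:

$$\max_{x_{ij}\in\{0,1\}} \;\sum_{i}\sum_{j} \hat{y}_{ij}\, x_{ij} \quad \text{s.t.} \quad \sum_{j} x_{ij}=1 \;\;\forall i, \qquad \sum_{i} t_{ij}\, x_{ij} \le C_j \;\;\forall j,$$

where $x_{ij}=1$ if wafer $i$ is assigned to machine $j$, $\hat{y}_{ij}$ is the predicted yield of that assignment, $t_{ij}$ is the processing time, and $C_j$ caps each machine's workload so the productivity target is met.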
Abstract
Purpose: The Fourth Industrial Revolution (4th IR) affects the mode of company management. In this paper, a revised social responsibility (SR) model is presented as an evaluation tool for corporate social responsibility (CSR) performance for sustainable organizational growth in the era of the 4th IR.
Design/methodology/approach: To develop an SR model that can be used well in the era of the 4th IR, the key references are “ISO 26000: Guidance on Social Responsibility” and “the Global Reporting Initiative (GRI) Guidelines.” For ISO 26000 and the GRI guidelines, see the homepages in the References section. On the basis of these guidelines, a new SR model for sustainable development in the 4th IR is developed in this paper.
Findings: For a new SR model in the 4th IR, the concepts of management quality, quality responsibility, creating shared value, social value and open data and open quality management (QM) are incorporated into the existing International Organization for Standardization (ISO) 26000 evaluation criteria.
Originality/value: The 4th IR is changing the concepts of both QM and SR. To the best of the authors’ knowledge, the new concept of SR is yet to be discussed extensively. In this paper, a new SR model is suggested to reflect the characteristics of the 4th IR.
Abstract
Purpose: The proliferation of customer-related data provides companies with numerous service opportunities to create customer value. This study develops a framework to use this data to provide services.
Design/methodology/approach: This study conducted four action research projects on the use of customer-related data for service design with industry and government. Based on these projects, a practical framework was designed, applied, and validated, and was further refined by analyzing relevant service cases and incorporating the service and operations management literature.
Findings: The proposed customer process management (CPM) framework suggests steps a service provider can take when providing information to its customers to improve their processes and create more value-in-use by using data related to their processes. The applicability of this framework is illustrated using real examples from the action research projects and relevant literature.
Originality/value: “Using data to advance service” is a critical and timely research topic in the service literature. This study develops an original, specific framework for a company’s use of customer-related data to advance its services and create customer value. Moreover, the four projects with industry and government are early CPM case studies with real data.
Abstract
Service design is a multidisciplinary area that helps innovate services by bringing new ideas to customers through a design-thinking approach. Services are affected by multiple factors, which should be considered in designing services. In this paper, we propose the multi-factor service design (MFSD) method, which helps consider the multi-factor nature of service in the service design process. The MFSD method has been developed through and used in five service design studies with industry and government. The method addresses the multi-factor nature of service for systematic service design by providing the following guidelines: (1) identify key factors that affect the customer value creation of the service in question (in short, value creation factors), (2) define the design space of the service based on the value creation factors, and (3) design services and represent them based on the factors. We provide real stories and examples from the five service design studies to illustrate the MFSD method and demonstrate its utility. This study will contribute to the design of modern complex services that are affected by varied factors.
Abstract
Mobile health (mHealth) services support the continuous health-related monitoring, feedback, and behavior modification of individuals and populations through the use of personal mobile communication devices. Poor service quality is a major reason why many users have discontinued using mHealth services. However, only a few studies have identified the critical quality components for continuance intention. The current study aims to identify the crucial quality dimensions for users' continuance intention in an mHealth service called Onecare. This service provides various forms of support for the day-to-day health behavior monitoring of college students by utilizing daily behavior data. In this research, five major quality dimensions of mHealth services, namely, content quality, engagement, reliability, usability, and privacy, were derived from existing studies. The effect of each quality dimension on continuance intention was estimated by analyzing the survey responses of 191 Onecare service users. The quality dimension with the greatest effect on continuance intention was engagement, followed by content quality and reliability. By contrast, the effects of usability and privacy on continuance intention were insignificant. Furthermore, this study found that the optimal quality management strategy can change depending on the objective, i.e., whether to increase continuance intention or satisfaction. These results will help mHealth service managers allocate their limited resources to effectively and efficiently improve continuance intention. Future research is required to verify whether the findings of this study are generalizable to other populations because the sample used in this work was specific to Korean college students.
Abstract
Numerous companies in manufacturing industries have “servitized” their value propositions to address issues on product commoditization and sustainability. A key component of servitization is informatics, which transforms product and customer data into information for customers. In this study, informatics-based service is defined as a type of service wherein informatics is crucial to customer value creation. Despite the importance of this concept, studies on the design of informatics-based services in manufacturing industries are rare. This paper reports on two case studies on such designs. Informatics-based services have been designed for a major Korean automobile manufacturer and the Korea Transportation Safety Authority (TS) based on their large vehicle-related databases. The first case study with the automobile manufacturer aims to design vehicle operations and health management services for passenger vehicle drivers while the second study with TS focuses on the design of driving safety enhancement services for commercial vehicle (i.e., bus, taxi, and truck) drivers. Based on the case studies, this paper discusses various aspects of informatics-based service design in manufacturing industries. This study would assist researchers and practitioners in designing new informatics-based services and contribute to promoting and inspiring research on intelligent services in manufacturing industries under the current information economy.
Abstract
The South Korean government retains valuable health-related data of nearly all citizens. This data set includes insurance data, diagnosis history data, treatment history data, and medical examination data. This paper presents a case study with the National Health Insurance Service of South Korean government to develop service concepts that utilize such health-related data. A service concept indicates what to offer to customers and how to offer it. In the case study, 138 service ideas that utilize health-related data were generated based on literature and focus group interviews with 14 experts. The ideas were evaluated by 20 experts via focus group interviews. Eight new service concepts were designed based on the ideas, and then evaluated by 19 experts and 612 citizens. The service concepts are expected to improve the national healthcare system of South Korea. This paper also presents key dimensions of health-related data utilization services and issues in developing such services, which were identified based on the case study. The original contribution of this study is the development of health-related data utilization service concepts at a national scale. This work will serve as a foundational reference for future research aimed at developing such services.
Abstract
A product-service system (PSS) is an integrated bundle of products and services that aims at creating customer utility and enhancing manufacturers' competitiveness. Given that products and services are integrated over a specific cycle, the PSS lifecycle has become a key notion in PSS research. However, most studies focus on the PSS lifecycle from a provider perspective, although the essence of PSS is to create customer utility. The current work proposes a customer-oriented model of the PSS lifecycle as a solid basis for analyzing PSS from a customer perspective. A two-phase procedure was conducted to develop the model. In phase 1, customer activities in 118 PSS cases were analyzed to identify three specific types of PSS lifecycle (for product-, use-, and result-oriented PSS). In phase 2, one general type of PSS lifecycle was derived by analyzing the three specific types. In addition, the usefulness of these types in PSS research was shown through an example of car-related PSS cases. The proposed model comprises one general and three specific types of PSS lifecycle. Four stages ("plan," "decide," "solve," and "end") were formed in the general type, while seven stages were formed for product-oriented PSS, six for use-oriented PSS, and five for result-oriented PSS. The proposed model can be utilized to supplement existing studies in considering the PSS mechanism from a customer perspective. A concrete notion of the PSS lifecycle from a customer perspective may contribute to customer-oriented PSS innovations.
Abstract
Cities worldwide are attempting to transform themselves into smart cities. Recent cases and studies show that a key factor in this transformation is the use of urban big data from stakeholders and physical objects in cities. However, the knowledge and framework for data use for smart cities remain relatively unknown. This paper reports findings from an analysis of various use cases of big data in cities worldwide and the authors’ four projects with government organizations toward developing smart cities. Specifically, this paper classifies the urban data use cases into four reference models and identifies six challenges in transforming data into information for smart cities. Furthermore, building upon the relevant literature, this paper proposes five considerations for addressing the challenges in implementing the reference models in real-world applications. The reference models, challenges, and considerations collectively form a framework for data use for smart cities. This paper will contribute to urban planning and policy development in the modern data-rich economy.
Abstract
Smart wellness services collect various types of lifelogs, such as walking steps and sleep duration, via smart devices. However, most existing smart wellness services focus on displaying each individual lifelog to users. Therefore, they have limitations in supporting an overall and easy understanding of various lifelogs. A lifelogs-based daily wellness score (LDWS) is a useful tool to resolve such limitations. LDWS combines the lifelogs into a score that represents the overall level of daily health behaviors, thus supporting overall and easy health behavior monitoring by users. This research developed an LDWS as part of developing a smart wellness service for college students (SWSCS) in collaboration with an IT company. Lifelogs of 41 college students were collected through a four-week pilot run of SWSCS and were subsequently fitted to a random effects model. Based on the model estimates, the LDWS was determined by linearly aggregating seven behavior variables. The utility of the developed LDWS was validated through a second pilot run of SWSCS. This paper also discusses the potential use of LDWS for SWSCS and the factors to be considered in developing a lifelogs-based wellness score for a smart wellness service. This research would contribute to advancing smart wellness services with lifelogs.
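In generic notation (illustrative only; the variable names, scaling, and weights are not taken from the paper), such a linearly aggregated daily score has the form

$$\mathrm{LDWS}_{it} \;=\; w_0 + \sum_{k=1}^{7} w_k\, x_{kit},$$

where $x_{kit}$ is the $k$-th behavior variable (e.g., step count or sleep duration) of user $i$ on day $t$, and the weights $w_k$ come from the fitted random effects model estimates.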
Abstract
Experience-centric service (ExS) is a type of service through which customers experience emotionally appealing events and activities that result in distinctive memory. The literature argues that ExS design should be a research priority in this experience economy, yet little is known about how to articulate ExSs in their design. This paper proposes a tool called the Experience Design Board for visualizing an ExS delivery process as a basis for its analysis and design. The tool is a matrix-shaped board where the key factors of experience creation in ExS (namely, servicescape, frontstage employees, other customers, backstage employees, and technology support systems) are represented in rows, and the customer experience phases are placed in columns. The tool is useful in analyzing and designing how the key factors of ExS create customer experience. The tool integrates several work streams within the evolving ExS literature into its structure and is generic enough to accommodate various ExSs in physical and digital experience contexts. By visualizing an ExS delivery process from beginning to end, the designer can obtain a systematic understanding of the essential attributes of ExS and can use it for an effective design. This tool would serve as a basis for service design in this experience economy.
Abstract
Various types and massive amounts of data are collected in the automotive industry. Such data proliferation facilitates and improves the design of services for vehicle operations management (VOM). A VOM service is a service that helps drivers drive safely, conveniently, and pleasurably with the use of VOM-related data. Despite the applicability of big data to VOM service design, few efforts have been made to establish a big data-based design process for VOM services. To fill the research gap, this study proposes an approach to analyzing and utilizing VOM-related data for designing VOM services. The proposed approach aids service designers in designing VOM services by using VOM-related data. A case study on the design of an eco-driving service, a popular VOM service, is presented to demonstrate the feasibility and effectiveness of the approach. The proposed approach could facilitate the design of VOM services and provide a foundation for data-driven service innovations.
Abstract
We propose a two-step procedure based on data analytics to help service providers efficiently and effectively implement a health promotion program to prevent hypertension. First, we developed a prediction model to identify people who are at risk of developing hypertension. Then, to eliminate specific risk factors for each of these individuals, we proposed four methods to create an index that represents the importance of each intervention program, which is a subprogram of the health promotion program. This index can be used to recommend appropriate intervention programs for each individual. We used the national sample cohort databases of South Korea to offer a case study of the implementation of the proposed procedure. The constructed prediction model using logistic regression has adequate accuracy, and the proposed index, computed by the different methods, yields results similar to those of a doctor. This two-step procedure, with automatic modeling based on data, will be useful for saving human resources and providing informative, personalized results based on individual healthcare records.
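As a hedged sketch of the first step, the snippet below fits a logistic regression risk model on synthetic data; the feature set and the 0.5 decision threshold are placeholders, not the study's actual cohort variables or cutoffs.

```python
# Sketch of step 1: logistic regression to flag people at risk of hypertension.
# Features and threshold are hypothetical stand-ins for the cohort data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # e.g., age, BMI, glucose, smoking
y = (X @ np.array([0.8, 0.6, 0.4, 0.3]) + rng.normal(size=1000) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]         # P(developing hypertension)
at_risk = risk > 0.5                           # candidates for intervention programs
```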
Abstract
Smart service systems are everywhere, in homes and in the transportation, energy, and healthcare sectors. However, such systems have yet to be fully understood in the literature. Given the widespread applications of and research on smart service systems, we used text mining to develop a unified understanding of such systems in a data-driven way. Specifically, we used a combination of metrics and machine learning algorithms to preprocess and analyze text data related to smart service systems, including text from the scientific literature and news articles. By analyzing 5,378 scientific articles and 1,234 news articles, we identify important keywords, 16 research topics, 4 technology factors, and 13 application areas. We define “smart service system” based on the analytics results. Furthermore, we discuss the theoretical and methodological implications of our work, such as the 5Cs (connection, collection, computation, and communications for co-creation) of smart service systems and the text mining approach to understand service research topics. We believe this work, which aims to establish common ground for understanding these systems across multiple disciplinary perspectives, will encourage further research and development of modern service systems.
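The abstract does not name the specific topic-extraction algorithm; as a purely illustrative stand-in, the sketch below shows how research topics can be derived from article text with LDA topic modeling, one common choice for this kind of analysis.

```python
# Illustrative topic extraction with LDA (an assumed, common choice; not
# necessarily the study's actual method).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "smart service systems connect devices and collect data",
    "healthcare monitoring service with sensors and analytics",
    "energy management in smart homes using IoT platforms",
]
tf = CountVectorizer(stop_words="english").fit(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(tf.transform(docs))

terms = tf.get_feature_names_out()
for k, comp in enumerate(lda.components_):     # top words per topic
    top = terms[comp.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```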
Abstract
Service is a key context for the use of information technology (IT) as IT digitizes the information interactions within service and facilitates value creation, thereby contributing to service innovation. The recent proliferation of (big) data provides numerous opportunities for information-intensive services (IISs), in which information interactions have the most effect on value creation. In the modern data-rich economy, understanding mechanisms and related factors of the data-based value creation in IISs is essential in improving such services with IT. This study identifies nine key factors that characterize the data-based value creation: (1) data source, (2) data collection, (3) data, (4) data analysis, (5) information on the data source, (6) information delivery, (7) customer (information user), (8) value in information use, and (9) provider network. These nine factors were identified and defined on the basis of our action research through six projects with industry and government that used specific datasets to design new IISs as well as on the basis of analysis of data use in 149 IIS cases. This paper demonstrates the utility of the nine factors in describing, analyzing, and designing the full spectrum from data collection to value creation in IISs. Our main contribution is to provide a simple yet comprehensive and empirically tested basis for the use and management of data to facilitate service value creation in this data-rich economy.
Abstract
Various types and massive amounts of customer behavior data are collected in various industries, such as transportation, healthcare, hospitality, and logistics. The use of customer behavior data can improve the design activities of service firms. Despite the applicability of customer behavior data to service design, only a few studies have examined an approach to utilize customer behavior data in service design. This study proposes an approach for designing services with customer behavior data. The approach is based on a case study on eco-driving service design with the behavior data of bus drivers. This study extends the research on service design by demonstrating how customer behavior data are utilized for service design and assisting service designers in designing services with customer behavior data.
Abstract
A dual-response surface optimization approach assumes that response surface models of the mean and standard deviation of a response are fitted well to experimental data. However, it is often difficult to satisfy this assumption when dealing with a large volume of operational data from a manufacturing line. The proposed method attempts to optimize the mean and standard deviation of the response without building response surface models. Instead, it searches for an optimal setting of input variables directly from operational data by using a patient rule induction method. The proposed approach is illustrated with a step-by-step procedure for an example case.
Abstract
Purpose: The proliferation of (big) data provides numerous opportunities for service advances in practice, yet research on using data to advance service is at a nascent stage in the literature. Many studies have discussed the phenomenological benefits of data to service. However, limited research describes the managerial issues behind such benefits, although a holistic understanding of these issues is essential for using data to advance service in practice and provides a basis for future research. Our objective is to address this research gap.
Design/methodology/approach: "Using data to advance service" is about change in organizations. Thus, this study uses action research methods of creating real change in organizations together with practitioners, thereby adding to scientific knowledge about practice. The authors participated in five service design projects with industry and government that used different datasets to design new services.
Findings: Drawing on lessons learned from the five projects, this study empirically identifies eleven managerial issues that should be considered in data use for advancing service. In addition, by integrating the issues and relevant literature, this study offers theoretical implications for future research.
Originality/value: "Using data to advance service" is a research topic that emerged originally from practice. Action research and case studies on this topic are valuable in understanding practice and in identifying research priorities by discovering the gap between theory and practice. This study used action research over many years to observe real-world challenges and to make academic research relevant to those challenges. We believe our empirical findings will help improve service practices of data use and stimulate future research.
Abstract
A desirability functions approach has been widely used in Multi-Response Optimization (MRO) due to its simplicity. Most of the existing desirability functions-based methods assume that the variability of the response variables is stable; thus, they focus mainly on the optimization of the mean of multiple responses. However, this stable variability assumption often does not apply in practical situations; thus, the quality of the product or process can be severely degraded due to the high variability of multiple responses. In this regard, we propose a new desirability functions method to simultaneously optimize both the mean and variability of multiple responses. In particular, the proposed method uses a posterior preference articulation approach, which has an advantage in investigating tradeoffs between the mean and variability of multiple responses. It is expected that process engineers can use this method to better understand the tradeoffs, thereby obtaining a satisfactory compromise solution.
Abstract
User experience (UX) refers to the comprehensive experience of a user when interacting with a product. UX plays an essential role in enhancing the value of a product in the current marketplace. Compared with a feature phone, a smartphone enables users to significantly extend the usage of the device. Given the impressive market growth of the smartphone, evaluating its UX has become important in its development process. However, studies on the evaluation of smartphone UX are limited. Thus, we conducted a study on smartphone UX from the perspective of UX evaluation. First, a total of 329 evaluation items for smartphone UX were identified based on a literature review and a user study, and they were categorized as product, context, and emotion items. Then, to utilize the items in the three categories, we proposed a two-phase procedure for UX evaluation consisting of the identification of key items (Phase 1) and the identification of causal relationships among the key items (Phase 2). In a case study, seven key contexts were identified, and the relationships among key items were statistically identified based on data from 461 users. The results of this study can help practitioners evaluate their smartphone UX in a systematic manner.
Abstract
Mobile service quality (m-SQ) is vital to manage the competitiveness of a company in the mobile business market. Existing studies on m-SQ share key characteristics of m-service in general, such as mobility and context awareness. A set of common m-SQ dimensions that reflects such key characteristics would serve as the theoretical backbone of m-SQ. This research aims to conduct a comprehensive review of existing studies on m-SQ scales and identify key dimensions of m-SQ to understand the essence of m-SQ scales. A total of 45 existing studies on m-SQ scales were reviewed and seven key dimensions of m-SQ scales were identified. This study is expected to serve as a solid knowledge base for conducting new investigations on m-SQ scale development as well as help practitioners utilize m-SQ scales.
Abstract
This study developed a procedure to determine the quality priorities of the internet protocol television (IPTV) service. First, a set of key elements of IPTV service quality was developed based on a literature review and a focus group interview. Second, analytic hierarchy process and the Kano model were applied to identify the requirements of experts and customers, respectively. The experts measured the importance and difficulty of management, whereas the customers measured the satisfaction level and importance of each quality element. Third, quality priorities were calculated through the entropy principle and scenario-based analysis. The proposed procedure is illustrated with a case study of a telecommunications company in Korea.
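For reference, entropy-based weighting in its standard form assigns larger weights to quality elements whose ratings vary more across respondents; the paper's exact computation may differ from this generic version:

$$E_j = -\frac{1}{\ln n}\sum_{i=1}^{n} p_{ij}\,\ln p_{ij}, \qquad w_j = \frac{1-E_j}{\sum_{k}(1-E_k)},$$

where $p_{ij}$ is the normalized score of quality element $j$ from respondent $i$, $n$ is the number of respondents, $E_j$ is the entropy of element $j$, and $w_j$ is its resulting weight.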
Abstract
Mobile location-based service (m-LBS) presents attractive business opportunities for various companies. Recent technological improvements have resulted in dramatic growth of m-LBSs. However, the development of scales for evaluating m-LBS quality has scarcely been addressed. This study aims to develop a new scale applicable to evaluating m-LBS quality. The scale was first designed qualitatively, and the designed scale was then assessed with survey data from 281 respondents. As a result, an m-LBS quality scale was developed, which consists of 9 quality dimensions and 29 measurement items. The distinctive characteristic of m-LBS is captured by a newly defined dimension called "localization" and its three measurement items (namely, organization, update, and inclusiveness). The proposed scale was shown to be statistically reliable and valid. The results of this study contribute a valid scale for measuring m-LBS quality.
Abstract
In this paper, considering the uncertainty associated with the fitted response surface models and the satisfaction degrees of the response values with respect to the given targets, we construct robust membership functions of the responses in three cases and explain their practical meanings. We translate the feasible regions of multiple response optimization (MRO) problems into α-level sets and simultaneously incorporate the model uncertainty with confidence intervals to ensure the robustness of the feasible regions. We then develop a robust fuzzy programming (RFP) approach to solve MRO problems. The key advantage of the presented method is that it accounts for the location effect, dispersion effect, and model uncertainty of the multiple responses simultaneously and thus can ensure the robustness of the solution. An example from the literature is presented to show the practicality and effectiveness of the proposed algorithm. Finally, some comparisons and discussions are given to further illustrate the developed approach.
Abstract
A product-service system (PSS) integrates products and services to fulfill customer needs and create sustainability. PSS evaluation requires the use of diverse criteria because PSSs are complex systems with multiple stakeholders and perspectives. This paper proposes an evaluation scheme for PSS models that consists of a set of 94 evaluation criteria and an evaluation procedure. The proposed set of criteria encompasses both provider and customer perspectives, all of the 3P (profitability, planet, and people) values, and the various PSS lifecycle phases, whereas existing studies only partially cover these aspects of PSS. The proposed set serves as an evaluation criterion repository from which users can easily identify the criteria relevant to their evaluation targets. Using the proposed set is more efficient than starting from scratch. The proposed evaluation scheme can be used either to compare different PSS models or to evaluate a single model. Case studies show that the proposed scheme can sufficiently evaluate both existing and newly launched PSS models as well as models under development. The proposed scheme is expected to serve as an efficient and effective aid for practitioners in PSS development.
Abstract
Semiconductors are fabricated through unit processes including photolithography, etching, diffusion, ion implantation, deposition, and planarization. Chemical mechanical planarization (CMP), which is essential in advanced semiconductor manufacturing processes, aims to achieve high planarity across the wafer surface. This paper presents a case study in which the optimal blend of a mixture slurry was obtained to improve two response variables (material loss and roughness) at the same time. The mixture slurry consists of several pure slurries; when all of the abrasive particles within a slurry are of the same size, the slurry is referred to as a pure slurry. The optimal blend was obtained by applying a multi-response surface optimization method. In particular, the recently developed posterior approach to dual response surface optimization was employed, which allows the CMP process engineer to investigate trade-offs between the two response variables. The two responses were better with the obtained blend than with the existing blend.
Abstract
Servicescape is one of the most important dimensions by which customers evaluate their shopping experience in a retail service. This research aimed to evaluate the servicescape design of the JDC Duty-free Shop in a systematic manner. A virtual reality (VR) model was used to visualize various options for the servicescape design. The preferred design was determined from the experimental results and then applied to the servicescape redesign of the shop. This research supports the relationship between servicescape design and customer perception, as well as the effectiveness of a VR-based laboratory experiment in evaluating servicescape design.
Abstract
Information-intensive service (IIS) is a type of service in which information interactions have a highly significant effect on service value creation. Recent innovations in information and communication technology (IT) have facilitated the creation of various types of IT-enabled IIS (IT-IIS), in which IT is essential for information interactions. This article first introduces the generic composition of the IIS value creation system. Viewing this composition from an IT-oriented perspective, the article then proposes classifications of the various types of IT-IIS. Understanding the generic composition and classifications of IT-IIS serves as a basis for designing new IT-IISs. This article also introduces two IT-IIS design case studies that the authors recently conducted. This article would help IT professionals pursue IT-enabled business innovation.
Abstract
Information-intensive service (IIS) is a type of service in which information interactions have the most effect on service value creation. Recent innovations in information and communication technology have created various types of IISs, and the literature argues that IIS should be a research priority in this information economy. This research proposes a new service blueprinting framework specialized for IISs, called the Information Service Blueprint. By blueprinting an IIS, the framework user can succinctly capture the big picture and key points of the complex IIS process in question. The Information Service Blueprint has served as a basis for blueprinting IISs in IIS design projects with industry and government. An experiment comparing the Information Service Blueprint with the conventional Service Blueprint also confirms its utility for blueprinting IISs. This research would serve as a basis for analyzing and designing IISs.
Abstract
After Cleaning Inspection Critical Dimension (ACICD), one of the main variables in the etch process, affects the electrical characteristics of fabricated semiconductor chips. Its target value should be determined to minimize the bias and variability of these electrical characteristics. This paper presents a case study in which the target value of ACICD is determined by the dual response optimization method. In particular, the recently developed posterior approach to dual response optimization is employed, allowing the analyst to easily determine the optimal compromise between bias and variability in the electrical characteristics. The performance at the obtained optimal ACICD setting has been shown to be better than that at the existing setting.
Abstract
A product-service system (PSS) is a novel type of business model that integrates products and services in a single system. It provides a strategic alternative to product-oriented economic growth and price-based competition in the global market. This research proposes a methodology to support the generation of innovative PSS concepts, called the PSS concept generation support system. The models and strategies of 118 existing PSS cases were analyzed, and the insights extracted were used to develop the methodology. The methodology consists of various tools and a systematic procedure to support the generation process. It is generic enough to be applied to a variety of PSS contexts. The methodology is demonstrated and verified via case studies on the washing machine and refrigerator industries. The proposed PSS concept generation support system can serve as an efficient and effective aid to PSS designers for new PSS development.
Abstract
The product-service system (PSS) is a system in which its integrated products and services jointly fulfill customer needs. The current research proposes a structured tool called the PSS Board to visualize the PSS process. This is a matrix board where the customer activities, state of the products, services, dedicated infrastructures, and partners are placed in rows, and the general PSS process steps are placed in columns. The visualized PSS on the board shows how the PSS provider and its partners aid customers’ job execution process. Previous PSS cases are visualized based on the proposed PSS Board; the utility of the PSS Board is also identified. The current research can serve as an effective basis to analyze PSS from the perspective of fulfilling customer needs, thus supporting companies in diagnosing and elaborating their respective PSSs.
Abstract
This study aims to design the strategy matrix for the product-service system (PSS) which integrates products and services. This study reviews major studies of the PSS; discusses the necessity for the PSS classification; proposes a new classification; and suggests a strategy matrix for the PSS. The results have practical implications for firms pursuing competitive advantage and sustainable growth.
Abstract
In dual response surface optimization, minimizing weighted mean squared error (WMSE) is a simple yet effective way of obtaining a satisfactory solution. To minimize WMSE, the weights of the squared bias and variance should be determined in advance. Determining the weights in accordance with the decision maker (DM)’s preference structure regarding the tradeoffs between the two responses is critical and difficult. In this study, we develop an interactive weighting method where the DM provides his/her preference information in the form of pairwise comparisons. Our method estimates the weights based on the pairwise comparisons in an interactive manner. The method obtains a satisfactory solution through several pairwise comparisons in the case examples that we tested.
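As a reading aid, the WMSE referred to above can be written in a common form; the notation below is assumed here, not taken from the abstract. With estimated mean response $\hat{y}(x)$, target $T$, estimated standard deviation $\hat{\sigma}(x)$, and weight $\lambda \in [0,1]$,

$\mathrm{WMSE}(x;\lambda) = \lambda\,(\hat{y}(x) - T)^2 + (1-\lambda)\,\hat{\sigma}^2(x)$.

The interactive method can then be read as estimating $\lambda$ from the DM’s pairwise comparisons rather than asking for it directly.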
Abstract
In multiresponse surface optimization, responses are often in conflict. To obtain a satisfactory compromise, the preference information of a decision maker (DM) on the tradeoffs among the responses should be incorporated into the problem. We propose an interactive method where the DM provides preference information in the form of pairwise comparisons. The results of pairwise comparisons are used to estimate the preference parameter values in an interactive manner. The method is effective in that a highly satisfactory solution can be obtained.
Abstract
Due to the rapid development of information technologies, abundant data have become readily available. Data mining techniques have been used for process optimization in many manufacturing processes in automotive, LCD, semiconductor, and steel production, among others. However, a large number of missing values occur in the data sets due to several causes (e.g., data discarded by gross measurement errors, measurement machine breakdown, routine maintenance, sampling inspection, and sensor failure), which frequently complicates the application of data mining to the data sets. This study proposes a new procedure for optimizing processes, called missing values-Patient Rule Induction Method (m-PRIM), which handles the missing-values problem systematically and yields considerable process improvement, even if a significant portion of the data sets has missing values. A case study in a semiconductor manufacturing process is conducted to illustrate the proposed procedure.
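The peeling stage at the core of PRIM can be sketched as follows; this is a minimal illustration in Python, where the function name, the peeling fraction alpha, and the stopping rule are assumptions for illustration, and m-PRIM's missing-value handling is not reproduced:

    import numpy as np

    def prim_peel(X, y, alpha=0.05, min_support=0.1):
        # Shrink a box over the input space, one face at a time, so that
        # the mean response inside the box increases; stop when the box
        # support falls below min_support or no peel improves the mean.
        mask = np.ones(len(y), dtype=bool)
        box = [(X[:, j].min(), X[:, j].max()) for j in range(X.shape[1])]
        while mask.mean() > min_support:
            best = None
            for j in range(X.shape[1]):
                lo = np.quantile(X[mask, j], alpha)      # candidate lower cut
                hi = np.quantile(X[mask, j], 1 - alpha)  # candidate upper cut
                for side, cut in (("lo", lo), ("hi", hi)):
                    trial = mask & ((X[:, j] >= cut) if side == "lo" else (X[:, j] <= cut))
                    if trial.sum() == 0:
                        continue
                    if best is None or y[trial].mean() > best[0]:
                        best = (y[trial].mean(), j, side, cut, trial)
            if best is None or best[0] <= y[mask].mean():
                break  # no peel improves the box mean
            _, j, side, cut, mask = best
            box[j] = (cut, box[j][1]) if side == "lo" else (box[j][0], cut)
        return box, mask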
Abstract
The responses in multiresponse surface optimization are often in conflict. To obtain a satisfactory compromise, the preference information of a decision maker (DM) on the tradeoffs among the responses should be incorporated into the problem. In most existing works, the DM is required to provide his/her preference information through preference parameters before solving the problem. However, extracting the preference parameter values representing the preference structure of the DM is often difficult. To overcome these difficulties, several alternative methods that do not require the preference information of the DM before solving the problem have been suggested. These alternative methods assess the preference parameters of the DM in a posteriori or progressive manner and are called posterior or interactive methods, respectively. This paper reviews specific types of posterior and interactive methods, which are referred to as solution selection methods. In solution selection methods, the DM provides his/her preference information in the form of solution selection. The required information is easy for the DM to provide.
Abstract
In multiresponse surface optimization (MRSO), responses are often in conflict. To obtain a satisfactory compromise, the preference information of a decision maker (DM) on the tradeoffs among the responses should be incorporated into the problem. In most existing work, the DM expresses a subjective judgment on the responses through a preference parameter before the problem-solving process, after which a single solution is obtained. In this study, we propose a posterior preference articulation approach to MRSO. The approach initially finds a set of nondominated solutions without the DM’s preference information, and then allows the DM to select the best solution from among the nondominated solutions. An interactive selection method based on pairwise comparisons made by the DM is adopted in our method to facilitate the DM’s selection process. The proposed method does not require that the preference information be specified in advance. It is easy and effective in that a satisfactory compromise can be obtained through a series of pairwise comparisons, regardless of the type of the DM’s utility function.
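The first step of such a posterior approach, screening candidate settings down to the nondominated (Pareto-optimal) ones, can be illustrated with a minimal Python sketch; it assumes all responses are to be minimized and that candidates are rows of a matrix F, both simplifications made here for illustration:

    import numpy as np

    def nondominated(F):
        # Keep row i unless some other row is at least as good in every
        # response and strictly better in at least one.
        F = np.asarray(F, dtype=float)
        keep = []
        for i, f in enumerate(F):
            dominated = any(np.all(g <= f) and np.any(g < f)
                            for k, g in enumerate(F) if k != i)
            if not dominated:
                keep.append(i)
        return keep

The DM’s pairwise comparisons then operate only on the retained candidates.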
Abstract
To assure that new services attain a certain level of quality, services should be developed and tested as systematically as products or software. In practice, this is rarely the case, especially with regard to the testing of service concepts, because appropriate solutions, processes, and methodologies seem to be missing. In this paper, the authors propose an approach to how service testing can be realized in practice and present supporting processes, methods, and technologies for testing services in laboratory environments.
Abstract
Photolithography in the semiconductor fabrication process is the core stage that determines the quality of semiconductor chips. The fabrication process is a batch process that causes variation in the quality of chips; thus, uniformity has always been an important goal of the process. This research is a case study on optimizing the photolithography stage to improve uniformity and the target achievement of critical dimension (CD), a quality measure of semiconductor chips. The case study finds the optimal setting of input variables in photolithography by applying multivariate normal linear (MVNL) modeling with operational data obtained by sampling during manufacturing. The predicted performance of the optimal setting is found to be close to the limit of improvement estimated based on the model. For practitioners, several issues that can be considered for better optimization are also provided.
Abstract
We explore an important problem in prioritizing product design alternatives, using a real-world case. Despite the importance of prioritization in the area of new product development, the development of systematic schemes has been limited and the concepts and methods developed in the decision analysis area do not seem to be used actively. Therefore, we propose a new method, referred to as the compromising prioritization technique, to prioritize the product design alternatives based on paired comparisons. It introduces type I and type II errors and compromises these two errors to arrive at a desirable order of alternatives. To accomplish this, the two indices of homogeneity and separation are developed together with a heuristic algorithm. A comparative study is also conducted to support our method for use in product development and analogous areas. We then demonstrate how to use the developed compromising prioritization technique using a case study on the asymmetric digital subscriber line (ADSL)-based high-speed internet service product.
Abstract
Dual response surface optimization considers the mean and the variation simultaneously. The minimization of Mean Squared Error (MSE) is an effective approach in dual response surface optimization. Weighted MSE (WMSE) is formed by imposing the relative weights, λ and 1 − λ, on the squared bias and variance components of MSE. To date, a few methods have been proposed for determining λ. The λ resulting from these methods is either a single value or an interval. This paper aims at developing a systematic method to choose a λ value when an interval of λ is given. Specifically, this paper proposes a Bayesian approach to construct a probability distribution of λ. Once the probability distribution of λ is constructed, the expected value of λ can be used to form WMSE.
Abstract
Industries such as automotive, LCD, PDP, semiconductor and steel produce products through multistage manufacturing processes. In a multistage manufacturing process, performances of stages are not independent. Therefore, the relationship between stages should be considered when optimizing the multistage manufacturing process. This study proposes a new procedure of optimizing a multistage manufacturing process, called Multistage PRIM (Patient Rule Induction Method). Multistage PRIM extends the scope of process optimization from a single stage to the multistage process, and it can use the information encapsulated in the relationship between stages when maximizing each stage’s performance. A case study in a multistage steel manufacturing process is conducted to illustrate the proposed procedure.
Abstract
In dual response surface optimization, the mean and standard deviation responses are often in conflict. To obtain a satisfactory compromise, a Decision Maker (DM)’s preference information on the tradeoffs between the responses should be incorporated into the problem. In most existing works, the DM expresses the subjective judgment on the responses through a preference parameter before the problem-solving process, after which a single solution is obtained. In this study, we propose a posterior preference articulation approach to dual response surface optimization. The posterior preference articulation approach initially finds a set of nondominated solutions without the DM’s preference information, and then allows the DM to select the best solution among the nondominated solutions. The proposed method enables the DM to obtain a satisfactory compromise solution with minimum cognitive effort and gives him/her the opportunity to explore and better understand the tradeoffs between the two responses.
Abstract
Because of the rapid growth of the service sector, an effective and systematic methodology for quality analysis and improvement is increasingly important. An internet protocol television (IPTV) service provides various contents on demand via television and a set-top box with an internet line. Its global market size is rapidly growing, but there is a lack of systematic analysis of IPTV service quality. The purpose of this paper is to develop an IPTV service quality model and identify key features and their relationships. The quality function deployment (QFD) method is applied in two phases. The results of the QFD application are analyzed, and key features and their relationships are identified. Moreover, examples of applying the analysis to improve IPTV service quality are developed for IPTV service providers. As the analysis provided is general and applicable to other services, this paper provides insights into improving service quality, not only for IPTV service providers, but also for service providers in many kinds of industries.
Abstract
Multiresponse optimization problems often involve incommensurate and conflicting responses. To obtain a satisfactory compromise in such a case, a decision maker (DM)’s preference information on the tradeoffs among the responses should be incorporated into the problem. This paper proposes an interactive method based on the desirability function approach to facilitate the preference articulation process. The proposed method allows the DM to adjust any of the preference parameters, namely, the shape, bound, and target of a desirability function, in a single, integrated framework. The proposed method would be highly effective in generating a compromise solution that is faithful to the DM’s preference structure.
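For reference, one widely used one-sided desirability function of the Derringer-Suich type makes the three preference parameters named above concrete; the notation is assumed here, not taken from the abstract. For a larger-the-better response with lower bound $L$, target $T$, and shape $s > 0$,

$d(y) = 0$ for $y < L$; $\quad d(y) = \left((y - L)/(T - L)\right)^{s}$ for $L \le y \le T$; $\quad d(y) = 1$ for $y > T$.

Adjusting $L$, $T$, or $s$ reshapes $d(y)$, which is the kind of adjustment the interactive method lets the DM make.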
Abstract
The prioritization of engineering characteristics (ECs) provides an important basis for decision-making in QFD. However, the prioritization results in the conventional QFD may be misleading since it does not consider the uncertainty of input information. This paper develops two robustness indices and proposes the notion of robust prioritization, which ensures that the EC prioritization is robust against the uncertainty. The robustness indices consider robustness from two perspectives, namely, the absolute ranking of ECs and the priority relationship among ECs. Based on the two indices, robust prioritization seeks to identify a set of ECs or a priority relationship among ECs in such a way that the result of robust prioritization is stable despite the uncertainty. Finally, the proposed robustness indices and robust prioritization are demonstrated in a case study conducted on the ADSL-based high-speed internet service.
Abstract
Most of the works in multiresponse surface methodology have been focusing mainly on the optimization issue, assuming that the data have been collected and suitable models have been built. Though crucial for optimization, a good empirical model is not easy to obtain from the manufacturing process data. This article proposes a new approach to solving the multiresponse problem directly without building a model: an approach called patient rule induction method for multiresponse optimization (MR-PRIM). MR-PRIM is an extension of PRIM to multiresponse problems. Three major characteristic features of MR-PRIM are discussed as the new approach is applied to the case of a steel manufacturing process.
Abstract
This paper proposes a variable influence (VI) index-based on-line method for diagnosing discontinuous processes. The VI index is developed using the concept of contribution plots, and can be used to explain the influence of a process variable on a specific fault. The proposed method consists of two phases: on-line VI model-building and on-line diagnosis via VI index comparison. In the on-line VI model-building phase, the on-line VI model is constructed using on-line fault data and used as a reference model for the on-line diagnosis of a new batch. The on-line diagnosis phase is triggered by an out-of-control signal of a new batch. It calculates the VI index values for new process data available at that time, which are compared with the on-line VI index values of the on-line VI model stored in the data/model base. The proposed method has the advantage that it does not require any process knowledge of operators and can automatically select an assignable cause via the comparison of VI index values. A case study on a PVC batch process is conducted to demonstrate the diagnosis performance of the proposed method. The performance of the proposed method is also evaluated when an on-line mode is not considered in the proposed framework.
Abstract
Most of the works in multiresponse surface methodology have been focusing mainly on the optimization issue, assuming that the data have been collected and suitable models have been built. Though crucial for optimization, a good empirical model is not easy to obtain from the manufacturing process data. This paper proposes a new approach to solving the multiresponse problem directly without building a model: an approach called ‘patient rule induction method for multiresponse optimization (MR-PRIM)’. MR-PRIM is an extension of PRIM to multiresponse problems. Three major characteristic features of MR-PRIM are discussed as the new approach is applied to the case of a steel manufacturing process.
Abstract
Quality function deployment (QFD) provides a specific approach for ensuring quality throughout each stage of the product development and production process. Since the focus of QFD is placed on the early stage of product development, the uncertainty in the input information of QFD is inevitable. If the uncertainty is neglected, the QFD analysis results are likely to be misleading. It is necessary to equip practitioners with a new QFD methodology that can model, analyze, and dampen the effects of the uncertainty and variability in a systematic manner. Robust QFD is an extended version of the QFD methodology, which is robust to the uncertainty of the input information and the resulting variability of the QFD output. This paper discusses recent research issues in Robust QFD. The major issues are related to the determination of overall priority, robustness evaluation, robust prioritization, and a web-based Robust QFD optimizer. Our recent research results on these issues are presented, and some future research topics are suggested.
Abstract
An extended QFD planning model that considers the longitudinal effect is presented for selecting design requirements (DRs). In the proposed model, the longitudinal effect is incorporated by introducing a time dimension into the existing house of quality structure. As a consequence of explicitly considering the longitudinal effect, the proposed model yields not only an optimal set of DRs but also the timing of their selection. The proposed model is demonstrated through a case study for improving customer loyalty in the high-speed internet service.
Abstract
A new type of desirability function for a multiresponse problem is proposed. The proposed desirability function, called the expected desirability function, is defined as the average of the conventional desirability values based on the probability distribution of the predicted response variable. The major advantage of the proposed approach over the conventional desirability function approach is that it considers the dispersion effects as well as the location effects of the responses. Moreover, it is shown that the proposed approach results in a higher process capability than the conventional desirability approach, especially in the asymmetric nominal-the-best type response case.
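A minimal Python sketch of the idea follows; the normal predictive distribution and the specific nominal-the-best shape are assumptions made here for illustration:

    import numpy as np

    def expected_desirability(y_hat, s_hat, d, n=100_000, seed=0):
        # Average the conventional desirability d(.) over the predicted
        # response distribution, taken here to be N(y_hat, s_hat^2).
        y = np.random.default_rng(seed).normal(y_hat, s_hat, size=n)
        return float(np.mean(d(y)))

    def d_ntb(y, L=0.0, T=5.0, U=10.0):
        # Conventional nominal-the-best desirability: bounds L, U; target T.
        left = np.clip((y - L) / (T - L), 0.0, 1.0)
        right = np.clip((U - y) / (U - T), 0.0, 1.0)
        return np.where(y <= T, left, right)

    # Same predicted mean, different dispersion: the more dispersed setting
    # gets a lower expected desirability, which the conventional desirability
    # evaluated at y_hat alone would not capture.
    print(expected_desirability(5.0, 0.5, d_ntb))  # larger value
    print(expected_desirability(5.0, 2.0, d_ntb))  # smaller value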
Abstract
The set of engineering characteristics (ECs) in quality function deployment (QFD) should be comprehensive enough to explain all the given customer requirements. The identification of such a set of ECs, which is currently done using brainstorming in practice, is a challenging task. Notwithstanding the rapid growth of the QFD literature, development of a systematic procedure for identifying ECs has scarcely been addressed. This paper proposes a systematic method for generating EC candidates. By providing a step-by-step procedure, the proposed method ensures that all the important ECs are identified, the generated ECs are measurable, and subjective judgments are minimally required. Hence, the shortcomings associated with the existing practice based on brainstorming can be effectively overcome. The unique characteristics of the proposed method are also demonstrated via a case study.
Abstract
Quality function deployment (QFD) provides a specific approach for ensuring quality throughout each stage of the product development. Since the focus of QFD is placed on the early stage of product development, the uncertainty in the input information of QFD is inevitable. If the uncertainty is neglected, the QFD analysis results can be misleading. This paper proposes an extended version of the QFD methodology, called Robust QFD, which is capable of considering the uncertainty of the input information and the resulting variability of the output. The proposed framework aims to model, analyze, and dampen the effects of the uncertainty and variability in a systematic manner. The proposed framework is demonstrated through a case study on the ADSL-based high-speed Internet service.
Abstract
The high-speed internet service has achieved a remarkable increase in penetration in recent years. In order to survive in this competitive market, companies should continue to improve their service performance. A high level of service performance is believed to be an effective way to improve customer satisfaction and loyalty. This paper aims to identify the causal relationship among network performance, customer satisfaction, and customer loyalty in the high-speed internet service context. Using the data collected from 51 current users of a VDSL service in Korea, this paper derives two types of causal relationship models, namely, a cross-sectional model and a longitudinal model. The modeling results are discussed from both descriptive and prescriptive perspectives.
Abstract
As the service sector is rapidly growing, one of the challenges faced by the entire service industries is the lack of effective methodologies for quality analysis and improvement. In service industries, the service quality serves as both a customer retention tool and a business differentiator in local and global competition. This paper aims at developing a systematic framework for service quality analysis and improvement. The proposed framework advantageously integrates quality function deployment (QFD) and structural equation modeling (SEM). More specifically, the framework utilizes QFD to collect, organize, and analyze qualitative information. The results of QFD are used as the basis for developing a service quality improvement strategy. The SEM is employed in building and analyzing quantitative models to devise a detailed strategy for the improvement. The proposed framework is demonstrated through a case study on the asymmetric digital subscriber line (ADSL) service of a major telecommunication company in Asia. This framework can be utilized for an effective analysis and improvement of service quality not just in the telecommunication industry, but also in any service industry which collects customer satisfaction and service performance data as part of its daily operation.
Abstract
An integrated modeling approach to simultaneously optimizing both the location and dispersion effects of multiple responses is proposed. The proposed approach aims to identify the setting of input variables to maximize the overall minimal satisfaction level with respect to both location and dispersion of all the responses. The proposed approach overcomes the common limitation of the existing multiresponse approaches, which typically ignore the dispersion effect of the responses. Several possible variations of the proposed model are also discussed. Properties of the proposed approach are revealed via a real example.
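In the notation assumed here (not taken from the abstract), with $d_i^{\mathrm{loc}}(x)$ and $d_i^{\mathrm{disp}}(x)$ denoting the satisfaction levels for the location and dispersion of response $i$ at input setting $x$, the integrated model can be read as a maximin problem:

$\max_{x} \; \min_{i} \; \min\!\left(d_i^{\mathrm{loc}}(x),\; d_i^{\mathrm{disp}}(x)\right)$.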
Abstract
This paper addresses the issue of how to determine the physical elements of a product which should be included in the product platform, called the product platform elements, for mass customisation. Two types of indices, namely, the similarity index and the sensitivity index, are proposed to determine the product platform elements from the mass and the customisation perspectives, respectively. The physical elements with a large similarity index and a small sensitivity index are selected as the product platform elements. The proposed methodology is demonstrated via a case study. The product platform developed in the proposed methodology should be useful in accommodating various customer tastes while maintaining the cost efficiency of mass production.
Abstract
An empirical model-based framework for monitoring and diagnosing batch processes is proposed. With the input of past successful and unsuccessful batches, the off-line portion of the framework constructs empirical models. Using online process data of a new batch, the online portion of the framework makes monitoring and diagnostic decisions on a real-time basis. The proposed framework consists of three phases: monitoring, diagnostic screening, and diagnosis. For monitoring and diagnosis purposes, the multiway principal-component analysis (MPCA) model and discriminant model are adopted as reference models. As an intermediate step, the diagnostic screening phase narrows down the possible cause candidates of the fault in question. By analysing the MPCA monitoring model, the diagnostic screening phase constructs a variable influence model to screen out unlikely cause candidates. The performance of the proposed framework is tested using a real dataset from a PVC batch process. It has been shown that the proposed framework produces reliable diagnosis results. Moreover, the inclusion of the diagnostic screening phase as a pre-diagnostic step has improved the diagnosis performance of the proposed framework, especially in the early time intervals.
Abstract
As the functional characteristics of passenger vehicles reach satisfactory levels, customers’ concerns with the ergonomic and aesthetic aspects of the interior design have increased. The present study developed satisfaction models of automotive interior materials for six parts including crash pad, steering wheel, transmission gearshift knob, audio panel, metal grain inlay, and wood grain inlay. Based on a literature survey, customer reviews on the web, and expert opinions, 8-15 material design variables were defined for the interior parts. The material design characteristics of 30 vehicle interiors were measured and customer satisfaction with the vehicle interiors was evaluated by 30 participants in the 20-30-year-old range. The material design variables were screened by evaluating their statistical, technical, and practical significance and satisfaction models were developed by quantification I analysis. The satisfaction models were used to identify relatively important design variables and preferred design features for the interior parts.
Abstract
Dual response surface optimization simultaneously considers the mean and the standard deviation of a response. The minimization of the mean squared error (MSE) is a simple, yet effective approach in dual response surface optimization. The bias and variance components of MSE need to be weighted properly if they are not of the same importance in the given problem situation. To date, the relative weights of bias and variance have been set equal or determined only by the data. However, the weights should be determined in accordance with the tradeoffs on various factors in quality and costs. In this paper, we propose a systematic method to determine the weights of bias and variance in accordance with a decision maker’s preference structure regarding the tradeoffs.
Abstract
A new loss function-based method for multiresponse optimization is presented. The proposed method introduces predicted future responses in a loss function, which accommodates robustness and quality of predictions as well as bias in a single framework. Properties of the proposed method are illustrated with two examples. We show that the proposed method gives more reasonable results than the existing methods when both robustness and quality of predictions are important issues.
Abstract
This article proposes the variables repetitive group sampling plan where the quality characteristic follows a normal distribution or lognormal distribution and has an upper or lower specification limit. The problem is formulated as a nonlinear programming problem where the objective function to be minimized is the average sample number and the constraints are related to lot acceptance probabilities at the acceptable quality level (AQL) and the limiting quality level (LQL) under the operating characteristic curve. Sampling plan tables are constructed for the selection of parameters indexed by AQL and LQL in the cases of known standard deviation and unknown standard deviation. It is shown that the proposed sampling plan significantly reduces the average sample number as compared with the single and double sampling plans.
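In the notation assumed here, with $L(p)$ the lot acceptance probability at incoming quality level $p$ under the plan's operating characteristic curve, $\alpha$ the producer's risk, and $\beta$ the consumer's risk, the formulation described above can be read as:

minimize $\mathrm{ASN}$ subject to $L(p_{\mathrm{AQL}}) \ge 1 - \alpha$ and $L(p_{\mathrm{LQL}}) \le \beta$,

with the plan parameters (sample size and acceptance constants) as decision variables.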
Abstract
Step method (STEM) is one of the well-known multi-objective optimization techniques. STEM has proven to be effective in extracting a decision maker (DM)’s preference information for a satisfactory compromise. However, it has been criticized for not considering the differing degrees of satisfaction associated with an objective function value, and for not providing flexible options in the process of preference information extraction. This paper proposes a modified STEM, called D-STEM, to overcome the methodological limitations of STEM. D-STEM utilizes the concept of a desirability function to realistically model the differing degrees of satisfaction. D-STEM also allows a DM to choose either tightening or relaxation, which makes the preference articulation process more efficient and effective. The advantages of D-STEM are demonstrated through an illustrative example.
Abstract
A common problem encountered in product or process design is the selection of optimal parameter levels that involves the simultaneous consideration of multiple response characteristics, called a multi-response surface problem. Notwithstanding the importance of multi-response surface problems in practice, the development of an optimization scheme has received little attention. In this paper, we note that Multi-Response Surface Optimization (MRO) can be viewed as Multi-Objective Optimization (MOO) and that various techniques developed in MOO can be successfully utilized to deal with MRO problems. We also show that some of the existing desirability function approaches can, in fact, be characterized as special forms of MOO. We then demonstrate some MOO principles and methods in order to illustrate how these approaches can be employed to obtain more desirable solutions to MRO problems.
Abstract
A pattern-based multivariate statistical diagnosis method is proposed to diagnose a process fault on-line. A triangular representation of process trends in the principal component space is employed to extract the on-line fault pattern. The extracted fault pattern is compared with the existing fault patterns stored in the fault library. A diagnostic decision is made based on the similarity between the extracted and the existing fault patterns, called a similarity index. The diagnosis performance of the proposed method is demonstrated using simulated data from the Tennessee Eastman process. The diagnosis success rate and robustness to noise of the proposed method are also discussed via computational experiments.
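As a rough illustration of a similarity-index decision rule, a minimal Python sketch follows; the cosine similarity and flat pattern vectors are simplifying assumptions made here, and the paper's triangular representation of process trends is not reproduced:

    import numpy as np

    def diagnose(extracted, fault_library):
        # Compare the extracted on-line fault pattern with each pattern
        # stored in the fault library and return the most similar fault.
        p = np.asarray(extracted, dtype=float)
        sims = {name: float(p @ np.asarray(q, dtype=float)
                            / (np.linalg.norm(p) * np.linalg.norm(q)))
                for name, q in fault_library.items()}
        return max(sims, key=sims.get), sims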
Abstract
To ensure the safety of a batch process and the quality of its final product, one needs to quickly identify an assignable cause of a fault. Cho and Kim (2003) recently proposed a diagnosis method for batch processes using Fisher’s Discriminant Analysis (FDA), which showed a satisfactory performance on industrial batch processes. However, their method (or any other method based on empirical models) has a major limitation when the fault batches available for building an empirical diagnosis model are insufficient. This is a highly critical issue in practice because sufficient fault batches are likely to be unavailable. In this work, we propose a method to handle the insufficiency of the fault data in diagnosing batch processes. The basic idea is to generate so-called pseudo batches from known fault batches and utilise them as part of the diagnosis model data. The performance of the proposed method is demonstrated using a real data set from a PVC batch process. The proposed method is shown to be capable of handling the data insufficiency problem successfully, and yields a reliable diagnosis performance.
Abstract
A variety of mobile phones are available to consumers. They differ from each other in many design features including shape, color, size, and material. This study attempts to identify some of the design features of a mobile phone critical to user satisfaction. Empirical models linking design features to satisfaction levels were developed and used to identify critical design features. Design properties common to “desirable” and “undesirable” phones were then extracted by comparing the values of the critical design features. The approach used in this study may help the product designers identify critical design features with their desirable properties in a systematic manner.
Abstract
A common problem encountered in product or process design is the selection of optimal parameters that involves simultaneous consideration of multiple response characteristics, called a multiple response surface (MRS) problem. There are several approaches proposed for multiple response surface optimization (MRO), including the priority-based approach, the desirability function approach, and the loss function approach. The existing MRO approaches require that all the preference information of a decision maker be articulated prior to solving the problem. However, it is difficult for the decision maker to articulate all the preference information in advance. This paper proposes an interactive approach, called an interactive desirability function approach (IDFA), to overcome the common limitation of the existing approaches. IDFA focuses on extracting the decision maker’s preference information in an interactive manner. IDFA requires no explicit tradeoffs among the responses and gives an opportunity for the decision maker to learn his/her own tradeoff space. Consequently, through IDFA, it is more likely that the decision maker finds a solution which is faithful to his/her preference structure.
Abstract
Quality Function Deployment (QFD) is a concept and mechanism for translating the “voice of the customer” through the various stages of product planning, engineering, and manufacturing into a final product. Notwithstanding the rapid growth of the QFD literature, development of systematic procedures for an effective use of QFD has scarcely been addressed. In this paper, we first review the limitations of the existing QFD framework, and then present a synopsis of the recent methodological enhancement on QFD.
Abstract
To ensure the safety of a batch process and the quality of its final product, one needs to quickly identify an assignable cause of a fault. To solve the diagnosis problem of a batch process, Cho and Kim (2003) proposed a new statistical diagnosis method based on Fisher discriminant analysis (FDA). They showed satisfactory diagnosis performance on industrial batch processes. However, the diagnosis method of Cho and Kim (2003) has a major limitation: it does not work when the fault data available for building the discriminant model are insufficient. In this work, we propose a method to handle the insufficiency of the fault data in diagnosing batch processes. The diagnosis performance of the proposed method is demonstrated using a data set from a PVC batch process. The proposed method is shown to be able to handle the data insufficiency problem and yield reliable diagnosis performance.
Abstract
Batch processes play an important role in the production of low-volume, high-value products such as polymers, pharmaceuticals, and biochemicals. Multiway Principal Components Analysis (MPCA), one of the multivariate projection methods, has been widely used for monitoring batch processes. One major problem in the on-line application of MPCA is that the input data matrix for MPCA is not complete until the end of the batch operation, and thus the unmeasured portion of the matrix (called the “future observations”) has to be predicted. In this paper we propose a new method for predicting the future observations of the batch that is currently being operated (called the “new batch”). The proposed method, unlike the existing prediction methods, makes extensive use of the past batch trajectories. The past batch trajectory which is deemed the most similar to the new batch is selected from the batch library and used as the basis for predicting the unknown part of the new batch. A case study on an industrial PVC batch process has been conducted. The results show that the proposed method results in more accurate prediction and has the capability of detecting process abnormalities earlier than the existing methods.
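The prediction idea can be sketched in Python as follows; the Euclidean distance over the measured portion is an assumption made here for illustration, and past batches are assumed to be recorded on the same time grid:

    import numpy as np

    def predict_future(new_partial, batch_library):
        # Select the past batch trajectory most similar to the measured
        # part of the new batch, then reuse its remaining portion as the
        # predicted future observations of the new batch.
        x = np.asarray(new_partial, dtype=float)
        t = len(x)
        dists = [np.linalg.norm(np.asarray(b, dtype=float)[:t] - x)
                 for b in batch_library]
        closest = np.asarray(batch_library[int(np.argmin(dists))], dtype=float)
        return np.concatenate([x, closest[t:]])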
Abstract
A new statistical online diagnosis method for a batch process is proposed. The proposed method consists of two phases: offline model building and online diagnosis. The offline model building phase constructs an empirical model, called a discriminant model, using various past batch runs. When a fault of a new batch is detected, the online diagnosis phase is initiated. The behaviour of the new batch is referenced against the model, developed in the offline model building phase, to make a diagnostic decision. The diagnosis performance of the proposed method is tested using a dataset from a PVC batch process. It has been shown that the proposed method outperforms existing PCA-based diagnosis methods, especially at the onset of a fault.
Abstract
A systematic modeling approach to describing, prescribing, and predicting usability of a product has been presented. Given the evaluation results of the usability dimension (UD) and the measurement of the product’s design variables, referred to as the human interface elements (HIEs), the approach enables one to systematically assess the relationship between the UD and HIEs. The assessed relationship is called a usability model. Once built, such a usability model can relate, in a quantitative manner, the HIEs directly to the UDs, and thus can serve as an effective aid to designers by evaluating and predicting the usability of an existing or hypothetical product. A usability model for elegance of audiovisual consumer electronic products has been demonstrated.
Abstract
Quality function deployment (QFD) is a cross-functional planning tool which ensures that the voice of the customer is systematically deployed throughout the product planning and design stages. One of the common mistakes in QFD is to perform analysis using an inconsistent house of quality (HOQ) chart. An inconsistent HOQ chart is one in which the information from the roof matrix is inconsistent with that from the relationship matrix. This paper develops a systematic procedure to check the consistency of an HOQ chart. The proposed consistency check can be performed prior to QFD’s main analysis to ensure the validity of the final results. A procedure for identifying the source of the inconsistency, if the HOQ chart should fail the consistency test, is also developed. The proposed procedures are illustrated through examples.
Abstract
It is now widely accepted within the industrial community that quality assurance is essential for future survival and competitiveness. However, most activities for improving quality in large-scale manufacturing processes have been performed offline, without being integrated into the conventional shop floor control system (SFCS). In general, the SFCS has focused only on efficient production planning and scheduling. The goal of the paper is to propose the functional framework of a quality-oriented shop floor control system (QSFCS) for large-scale manufacturing processes, and then to test this framework on a real process. The proposed system is composed of two auxiliary components, one for matching sensory data and quality inspection data (data matching) and the other for defining the expert knowledge of the operator (knowledge acquisition), and three primary components that entail building prediction and control models (model design), and analyzing, diagnosing, and optimizing the processes to improve product quality (feedforward adjustment, online adjustment). A partial least-squares (PLS) method is employed to cope with the massive data sets online because of its minimal demands on measurement scales and sample size, and its ability to handle large numbers of highly correlated variables. The proposed framework is applied to the shadow mask manufacturing process, which consists of a few hundred process parameters and about 40 quality characteristics. The experimental case study shows that the quality deficiencies are reduced from 5% or 10% of occurrence to nearly 0%.
Abstract
Keywords: Quality Function Deployment; House of Quality Chart; Weighting Scale; Normalization
Abstract
Quality function deployment (QFD) is a cross-functional planning tool that ensures that the voice of the customer is systematically deployed throughout the product planning and design stages. Although many successful applications of QFD have been reported worldwide, designers face impediments to the adoption of QFD as a product design aid. One of the difficulties associated with the application of QFD is the large size of a house of quality (HOQ) chart, which is the principal tool for QFD. It is well known that it becomes more difficult and inefficient to manage a design project as the problem size becomes larger. This paper proposes to develop formal approaches to reducing the size of an HOQ chart using the concept of design decomposition. The decomposition approaches developed attempt to partition an HOQ chart into several smaller sub-HOQ charts which can be solved efficiently and independently. By decomposing a large HOQ chart into smaller sub-HOQ charts, the design team not only can enhance the concurrency of the design activities, but also reduce the amount of time, effort, and cognitive burden required for the analysis. This would help to obviate the objections to the adoption of QFD as a product design aid and improve the efficiency of its use in practice.
Abstract
Usability defined in this study consists of the following two groups of dimensions: objective performance and subjective image/impression, which are considered equally important in designing and evaluating consumer electronic products. This study assumes that the degree of each usability dimension can be estimated by the design elements of the products. A total of 48 detailed usability dimensions were identified and defined in order to explain the usability concept applicable to the consumer electronic products. The user interface of the consumer electronic products was decomposed into specific design elements (defined as human interface elements: HIEs). A total of 88 HIEs were measured for 36 products by using a measurement checklist developed in this study. In addition, each usability dimension was evaluated by using the modified free modulus method. Multiple linear regression techniques were used to model the relationship between the usability and the design elements. As a result, 33 regression models were developed. The models are expected to help the designers not only identify important design variables but also predict the level of usability of a specific consumer electronic product. The approach used in this study is expected to provide an innovative and systematic framework for enhancing the usability of the consumer electronic products as well as other consumer products with minor modifications.
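A minimal Python sketch of the kind of model fitting described follows; ordinary least squares stands in for the study's multiple linear regression, and all names are illustrative:

    import numpy as np

    def fit_usability_model(hie_matrix, ud_scores):
        # hie_matrix: (n_products, n_HIEs) measured human interface elements.
        # ud_scores: (n_products,) evaluations of one usability dimension.
        X = np.column_stack([np.ones(len(ud_scores)), hie_matrix])  # intercept
        coef, *_ = np.linalg.lstsq(X, ud_scores, rcond=None)
        return coef  # intercept followed by one weight per HIE

    def predict_usability(coef, hie_row):
        # Predict the UD level of an existing or hypothetical product.
        return coef[0] + float(np.dot(coef[1:], hie_row))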
Abstract
A modeling approach to optimize a multiresponse system is presented. The approach aims to identify the setting of the input variables to maximize the degree of overall satisfaction with respect to all the responses. An exponential desirability functional form is suggested to simplify the desirability function assessment process. The approach proposed does not require any assumptions regarding the form or degree of the estimated response models and is robust to the potential dependences between response variables. It also takes into consideration the difference in the predictive ability as well as relative priority among the response variables. Properties of the approach are revealed via two real examples: one classical example taken from the literature and another that the authors encountered in the steel industry.
Abstract
An economic procedure of selective assembly is proposed when a product is composed of two mating components. The major quality characteristic of the product is the clearance between the two components. The components are divided into several classes prior to assembly. The component characteristics are assumed to be independently and normally distributed with equal variance. The procedure is designed so that the proportions of both components in their corresponding classes are the same. A cost model is developed based on a quadratic loss function, and methods of obtaining the optimal class limits as well as the optimal number of classes are provided. Formulas for obtaining the proportion of rejection and the unavailability of mating components are also provided. The proposed model is compared with the equal width and the equal area partitioning methods using a numerical example.
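One step of such a cost model can be made explicit with a standard identity; the notation is assumed here, not taken from the abstract. If the clearance $Y$ between the two mating components has mean $\mu_Y$ and variance $\sigma_Y^2$ within a pair of matched classes, a quadratic loss $L(Y) = k\,(Y - y_0)^2$ around the target clearance $y_0$ has expected value

$E[L(Y)] = k\left[(\mu_Y - y_0)^2 + \sigma_Y^2\right]$,

so the optimal class limits trade off the clearance bias against the within-class variability.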
Abstract
Taguchi parameter design is used extensively in industry to determine the optimal set of process parameters necessary to produce a product that meets or exceeds customer expectations of performance while minimizing performance variation. The majority of research in Taguchi parameter design has concentrated on approaches to optimize process parameters based on experimental observation of a single quality characteristic. This paper develops a statistical method, the DMT method, to evaluate and optimize multiple quality characteristic problems. The method incorporates desirability functions, a performance statistic based on the mean squared error, and data-driven transformations to provide a systematic approach that is adjustable to a variety of situations and easy for non-experts to apply. This paper presents the DMT method in a step-by-step format and applies the method to two examples to illustrate its applicability to a variety of parameter design problems.
Not available
Abstract
In modern quality engineering, dual response surface methodology is a powerful tool. In this paper, we introduce a fuzzy modeling approach to optimize the dual response system. We demonstrate our approach in two examples and show the advantages of our method by comparing it with existing methods.
Abstract
Quality Function Deployment (QFD) has been used to translate customer needs and wants into technical design requirements in order to increase customer satisfaction. QFD utilizes the house of quality (HOQ), which is a matrix providing a conceptual map for the design process, as a construct for understanding Customer Requirements (CRs) and establishing priorities of Design Requirements (DRs) to satisfy them. Some methodological issues occurring in the conventional HOQ are discussed, and then a new integrative decision model for selecting an optimal set of DRs is presented using a modified HOQ model. The modified HOQ prioritization procedure employs a multi-attribute decision method for assigning relationship ratings between CRs and DRs instead of a conventional relationship rating scale, such as 1-3-9. The proposed decision model has been applied to an indoor air quality improvement problem as an illustrative example.
Abstract
This article presents a model for determining the composition of a United States peacekeeping force deployed to Bosnia. The model, which is loosely based on Quality Function Deployment (QFD), uses three matrices in series to relate the interests of stakeholders involved in the conflict to the composition of the US force deployed in Bosnia. In addition, we have used AHP to determine the weighted importance of various stakeholders and the intensity of the relationship between the variables involved in the model. The recommendations of the model are more or less validated by the actual force composition currently deployed in Bosnia. The advantage of the model is to add “fine tuning” and precision to an otherwise ad hoc decision making process concerning the deployment of armed forces. The model can be used in other force composition planning scenarios.
Abstract
A novice-friendly decision support system prototype for quality function deployment (QFD) called QFD Optimizer is developed based upon an integrated mathematical programming formulation and solution approach. QFD Optimizer not only helps a design team build a house of quality chart, but also supports them in understanding and analyzing the system interrelationships, as well as obtaining optimal target engineering characteristic values. QFD Optimizer was tested experimentally and in a real design setting on students and practitioners to ascertain its potential viability and effectiveness. The results suggest that it has the potential to help users find improved feasible designs yielding higher customer satisfaction (i.e., improving the quality of design) more rapidly (i.e., reducing the design cycle time) compared with the current manual, ad hoc approach. QFD Optimizer can be used by novice as well as expert users, and leads to a better understanding of the complex interrelationships between customer needs and the engineering characteristics and among the engineering characteristics. Hence, it can and has been used as an effective quality improvement training tool, and shows promise for application in practice.
Abstract
Keywords: Quality Function Deployment; Product Design Characteristics; Factor Analysis
Abstract
Keywords: Quality Function Deployment; Target Design Characteristics; Optimization; Spreadsheet Factor Analysis
Abstract
Nonparametric linear regression and fuzzy linear regression have been developed based on different perspectives and assumptions, and thus there exist conceptual and methodological differences between the two approaches. This article describes their comparative characteristics such as basic assumptions, parameter estimation, and applications, and then compares their predictive and descriptive performances by a simulation experiment to identify the conditions under which one method performs better than the other. The experimental results indicate that nonparametric linear regression is superior to fuzzy linear regression in predictive capability, whereas their descriptive capabilities depend on various factors. When the size of the data set is small, error terms have small variability, or when the relationships among variables are not well specified, fuzzy linear regression outperforms nonparametric linear regression with respect to descriptive capability. The conditions under which each method can be used as a viable alternative to the conventional least squares regression are also identified. The findings of this article would be useful in selecting the proper regression methodology to employ under specific conditions for descriptive and predictive purposes.
Abstract
Statistical linear regression and fuzzy linear regression have been developed from different perspectives, and thus there exist several conceptual and methodological differences between the two approaches. The characteristics of both methods, in terms of basic assumptions, parameter estimation, and application, are described and contrasted. Their descriptive and predictive capabilities are also compared via a simulation experiment to identify the conditions under which one outperforms the other. It turns out that statistical linear regression is superior to fuzzy linear regression in terms of predictive capability, whereas their comparative descriptive performance depends on various factors associated with the data set (size, quality) and proper specification of the model (aptness of the model, heteroscedasticity, autocorrelation, nonrandomness of error terms). Specifically, fuzzy linear regression performance becomes relatively better, vis-à-vis statistical linear regression, as the size of the data set diminishes and the aptness of the regression model deteriorates. Fuzzy linear regression may thus be used as a viable alternative to statistical linear regression in estimating regression parameters when the data set is insufficient to support statistical regression analysis and/or the aptness of the regression model is poor (e.g., due to vague relationships among variables and poor model specification).
Abstract
There are certain circumstances under which the application of statistical regression is not appropriate or even feasible because it makes rigid assumptions about the statistical properties of the model. Fuzzy regression, a nonparametric method, can be quite useful in estimating the relationships among variables where the available data are very limited and imprecise, and variables are interacting in an uncertain, qualitative, and fuzzy way. Thus, it may have considerable practical applications in many management and engineering problems. In this paper, the relationship among the H value, membership function shape, and spreads of fuzzy parameters in fuzzy linear regression is determined, and the sensitivity of the spread with respect to the H value and membership function shape is examined. The spread of a fuzzy parameter increases as a higher value of H and/or a decreasingly concave or increasingly convex membership function is employed. By utilizing the relationship among the H value, membership function, and spreads of the fuzzy parameters, a systematic approach to assessing a proper H parameter value is also developed. The approach developed and illustrated enables a decision maker’s beliefs regarding the shape and range of the possibility distribution of the model to be reflected more systematically, and consequently should yield more reliable and realistic results from fuzzy regression. The resulting regression equations could, for example, also be used as constraints in a fuzzy mathematical optimization model, such as in quality function deployment.
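The dependence of the spreads on the $H$ value can be seen in a classic Tanaka-style linear programming formulation of fuzzy linear regression, shown here as one common formulation rather than necessarily the exact one used in the paper. With symmetric triangular fuzzy coefficients having centers $a_j$ and spreads $c_j \ge 0$:

minimize $\sum_i \sum_j c_j |x_{ij}|$ subject to, for every observation $i$, $\sum_j a_j x_{ij} + (1 - H)\sum_j c_j |x_{ij}| \ge y_i$ and $\sum_j a_j x_{ij} - (1 - H)\sum_j c_j |x_{ij}| \le y_i$.

As $H$ approaches 1, the factor $(1 - H)$ shrinks, so the spreads $c_j$ must grow for the constraints to hold, consistent with the relationship between the $H$ value and the spreads described above.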