The amount of health-related data is increasing. Recent advancements in Big Data analytics and related statistical and computational tools have raised interest in data-driven healthcare services. A data-driven healthcare (DHC) service supports healthcare by providing data and analytics that create value for the customer. DHC differs from conventional healthcare services in terms of the scope, target, and players covered by the service. Despite the increasing importance of DHC, research on DHC business models is still limited, so it is difficult to know in detail how valuable health-related data can be used. Companies need a clear understanding of the properties of DHC services to develop an appropriate strategy and thus exploit new opportunities. This study was therefore conducted to identify the main elements to be examined when designing and evaluating DHC service business models. Specifically, the objective of this study is to produce a taxonomy identifying the key dimensions, and their corresponding items, of DHC service business models. This study employs an iterative taxonomy development method to systematically derive a taxonomy that reflects both the literature and DHC service cases. The resulting taxonomy comprises nine dimensions and 42 corresponding items. For researchers, the proposed taxonomy identifies the dimensions that should chiefly be considered in business model research on DHC. For practitioners, the taxonomy serves as a strategic management tool for designing and benchmarking existing DHC service business models.
In the semiconductor manufacturing process, wafers consist of multiple chips that are tested for quality before packaging. The data collected during this wafer test, which measures electrical characteristics such as direct-current (DC) voltage for each chip, is known as wafer test data. However, missing values often occur in wafer test data due to factors such as faulty data acquisition sensors and intentional test skipping. This study presents a missing value imputation method that takes into account both the spatial similarity among chips and the correlation between test items in wafer test data. The proposed method incorporates chip location information to capture the spatial tendencies of chips and modifies the loss functions of Generative Adversarial Imputation Nets to preserve the correlations between test items before and after imputation. The effectiveness of the proposed method is demonstrated through an application to real-world wafer test data from a domestic semiconductor company, where it improves imputation accuracy for over 80% of test items compared to five existing methods. The improved imputation has the potential to increase wafer yield and the efficiency of manufacturing quality management.
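To make the correlation-preservation idea concrete, the following minimal sketch (Python with NumPy) computes one plausible penalty term of the kind that could be added to the GAIN loss: the gap between the test-item correlation matrix of the observed data and that of the completed data. The function name and formulation are illustrative assumptions, not the thesis's exact loss.

```python
import numpy as np

def correlation_penalty(X_true, X_imputed, mask):
    """Frobenius-norm gap between the test-item correlation matrices
    before and after imputation. A hypothetical stand-in for the extra
    loss term added to GAIN; the thesis's exact formulation differs.

    X_true    : (n_chips, n_items) array with NaNs at missing entries
    X_imputed : (n_chips, n_items) completed array
    mask      : boolean array, True where the value was observed
    """
    # Correlations of the completed data.
    corr_after = np.corrcoef(X_imputed, rowvar=False)
    # Pairwise-complete correlations of the observed data.
    n_items = X_true.shape[1]
    corr_before = np.eye(n_items)
    for i in range(n_items):
        for j in range(i + 1, n_items):
            both = mask[:, i] & mask[:, j]
            if both.sum() > 2:
                r = np.corrcoef(X_true[both, i], X_true[both, j])[0, 1]
                corr_before[i, j] = corr_before[j, i] = r
    return np.linalg.norm(corr_after - corr_before, ord="fro")
```

Minimizing such a term alongside the usual GAIN objectives would push the generator toward imputations that keep the inter-item correlation structure intact.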
Advanced metering infrastructure (AMI) is an integrated system of smart meters, communication networks, and data management systems. AMI allows the automatic and remote measurement and monitoring of electricity consumption. It also provides important information for managing peak demand and power quality. Such information can be useful for both electric power companies and customers (e.g., households and building managers). Electric power companies can use the information to make intelligent decisions for the efficient operation of power plants. Customers can receive feedback about electricity price signals and their projected monthly bill, which can help them make more informed decisions about their usage. Given these enormous benefits, many efforts have been made to promote AMI and to utilize the data it collects. This research has two objectives. The first objective is to develop an AMI acceptance model. Despite the many benefits of AMI, several obstacles to its penetration remain. Among them, this research focuses on the information privacy concerns (IPC) and perceived bad electricity usage habits (PEUH) of households. The electricity usage data collected by AMI can disclose detailed information about the behaviors and activities of a particular household. This disclosure could make households feel surveilled and worry about invasion of privacy. Additionally, electric power companies have emphasized that households with bad electricity usage habits, such as using a large amount of electricity with irregular usage patterns, are the potential customers most likely to benefit from AMI. However, this argument has not yet been investigated from the household's perspective. Therefore, this research examines the effect of IPC and PEUH on the acceptance of AMI. The second objective is to utilize electricity usage data to understand households' lives and solve practical problems. For this objective, two case studies are conducted. The first case identifies differences between the perceived and actual electricity usage of households. Understanding these differences is important for developing effective strategies to promote customer engagement programs (e.g., demand response and electricity savings). The second case assesses noise discomfort caused by electrical appliances between neighbors; a method is proposed that utilizes electricity usage data to assess the level of such discomfort between neighboring households. This research provides the following contributions. For the first objective, the developed scales can be a starting point for further exploring the IPC and PEUH of households, aspects that existing studies have not considered. The results of the AMI acceptance model also provide practical insights into how and why IPC and PEUH influence the acceptance of AMI, and can help electric power companies establish effective strategies for AMI penetration in households. For the second objective, Case I provides AMI data users with practical information on how differently households perceive their electricity consumption and on how the data should be utilized accordingly. Case II provides a method to assess noise discomfort between neighbors by utilizing electricity usage data, which can help analysts build models to identify noise discomfort caused by electrical appliances.
Smart Safety Living Lab is a living lab facility, constructed and operated by KITECH in Korea, to support the evaluation, improvement, and certification of smart safety products and services. In this living lab, user experience (UX) is evaluated to enhance the user acceptance and market competitiveness of products and services. This thesis develops a User Experience Evaluation Methodology for the Smart Safety Living Lab (SSLL-UXEM) that accommodates the characteristics of the Living Lab and of smart safety products and services so that UX evaluation can be carried out systematically and efficiently. SSLL-UXEM is a guideline and toolkit that enables UX evaluators to conduct UX evaluations of products and services systematically and efficiently. The methodology consists of a structured process for UX evaluation, a guideline for conducting each step of the process, and a set of forms for recording the major items in each step. The usefulness of the proposed methodology is shown via an expert evaluation and case studies. SSLL-UXEM is expected to improve the efficiency of UX evaluation and the consistency of its results. From a practical perspective, this research contributes to evaluating smart safety products and services and provides a basis for UX evaluation in various living labs in the future.
A multi-stage manufacturing process (MMP) consists of several consecutive process stages, each of which has multiple machines performing the same functions in parallel. A manufacturing path (simply referred to as a path) is defined as an ordered set recording the machines assigned to a product at each process stage of an MMP. An MMP usually produces products through various paths. In practice, the machines in a process stage have different operational performances, which accumulate during production and affect the quality of products. In addition, the performance of the machines gradually decreases over time due to their usage. Consequently, the quality of a product varies depending on its path. This research aims to develop methods to derive the paths that produce products whose quality is expected to exceed a pre-determined level; such paths are referred to as golden paths. The derivation of the golden paths is based on the analysis of production log data, which contains information about the machines involved in the production of individual products at each process stage of the MMP and can easily be collected in a modern manufacturing environment. Building on production log data thus enhances the accessibility and usability of the methods developed in this research. To accomplish the objective, this research deals with three research issues. The first issue is to develop a method to derive the golden paths when there is no performance degradation of the machines. Two approaches (namely, a product quality prediction model-based approach and a machine sequence pattern-based approach) are developed for this issue. The second issue is to develop a method to derive the golden paths when performance degradation of the machines exists and the degradation trend is deterministic. A health indicator (HI) is adopted to represent the current performance of each machine, and the two developed approaches are extended to reflect the changed circumstance by using the HI. The third issue is to develop a method for when the performance degradation of the machines is non-deterministic, which requires estimating the future HI of each machine. This research provides the following contributions. First, it provides a holistic perspective on product quality improvement by introducing the concept of the path, which helps to consider variations in product quality due to performance degradation of individual machines and inherent interactions between the process stages of the MMP. Second, it provides the golden paths, which contribute to increasing the production of superior-quality products and can be used as a benchmark for other paths in terms of quality improvement.
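As a concrete illustration of the prediction model-based approach, the sketch below enumerates all candidate paths and keeps those whose predicted quality exceeds the pre-determined level. The quality model, machine names, and threshold are hypothetical placeholders; a real application would train the predictor on production log data and would also account for the degradation issues treated in the later research issues.

```python
import itertools
import numpy as np

def golden_paths(machines_per_stage, predict_quality, threshold):
    """Enumerate every manufacturing path (one machine per stage) and
    keep those whose predicted quality exceeds the pre-determined
    threshold. `predict_quality` stands in for a model trained on
    production log data; a brute-force sketch of the idea only.
    """
    golden = []
    for path in itertools.product(*machines_per_stage):
        if predict_quality(path) >= threshold:
            golden.append(path)
    return golden

# Toy usage: 3 stages with 2 machines each and a made-up quality model.
stages = [("M11", "M12"), ("M21", "M22"), ("M31", "M32")]
scores = {"M11": 0.9, "M12": 0.7, "M21": 0.8, "M22": 0.9,
          "M31": 0.85, "M32": 0.6}
quality = lambda path: np.mean([scores[m] for m in path])
print(golden_paths(stages, quality, threshold=0.85))
```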
Recently, various products and services have been combined to satisfy customers' needs. For this reason, differentiation of user experience (UX) takes center stage rather than differentiation of function. UX encompasses all aspects of a user's experience in using a particular product, service, or system, and with the company that provides it. Many UX practitioners and researchers try to evaluate UX, and UX experiments are a primary means of doing so. However, existing methodologies for UX experiments have several limitations: they provide only partial support for designing a UX experiment, and most of them were developed for a particular product or service, so they lack versatility. This thesis aims to develop the User Experience Experimental Design Tool (UXEDT), a versatile tool for supporting the design of UX experiments. UXEDT is composed of 14 dimensions and 42 items that must be determined to design a UX experiment. Furthermore, UXEDT can be applied to UX experiments on general products, services, and systems, not just a particular one. This thesis also describes the five-phase process of developing UXEDT. A versatility evaluation of UXEDT is also conducted; versatility here means whether the dimensions and items of UXEDT are applicable to designing UX experiments for a wide range of products, services, and systems. Two types of versatility evaluation are used: (1) case studies based on personally conducted UX experiments and (2) case studies based on a literature review. Consequently, UXEDT is verified as a versatile tool for designing UX experiments.
Smart wellness services support the wellness monitoring of individuals by utilizing smart devices (e.g., smartphones and smartwatches). Smart devices enable such services to collect various lifelogs (i.e., records of various health behaviors, such as activity, sleep, and diet). Existing smart wellness services utilize the lifelogs primarily to display a detailed record of each health behavior to users, and thus have limitations in supporting an overall and easy understanding of multiple health behaviors. A lifelogs-based wellness index (LWI) can resolve such limitations. An LWI calculates a wellness score from the lifelogs; the score intuitively represents the overall level of multiple health behaviors and thus enables individuals to monitor that level easily. Such usefulness is expected to stimulate the development of new LWIs in smart wellness services, but the existing index development process has limitations in supporting such development. This research addresses these limitations by creating a framework for developing LWIs in smart wellness services. This research addresses three objectives. The first is to define the framework, which consists of the factors and process needed to develop an LWI: the factors are what need to be considered in LWI development, and the process provides a step-by-step development procedure. The second objective is to develop an LWI for college students (LWICS) by utilizing the framework. The LWICS development process includes not only collecting and analyzing four weeks of lifelogs from 41 students but also assessing the generalizability and utility of LWICS through a survey and an experiment, respectively. This development case shows that the framework works well in practice. The third objective is to recommend a guideline on missing data handling in LWI development. Lifelogs for LWI development are likely to have a large amount of missing data because they are collected from daily life, and missing data can lead to an inaccurate LWI. The missing data handling guideline provided in this research suggests the most adequate methods for developing an accurate LWI. The framework provides a systematic method for new LWI development cases, guiding the "what" and "how" of LWI development that existing studies have rarely identified. If the LWICS developed in this research operates in a smart wellness service, it will effectively help college students monitor the overall level of their health behaviors. The missing data handling guideline is likewise expected to contribute to the development of accurate LWIs in new development cases.
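For intuition, here is a minimal sketch of how a lifelogs-based wellness score might be computed: each behavior is scored as attainment against a target and the scores are combined with weights. The behaviors, targets, and weights are illustrative assumptions, not the LWICS specification.

```python
import numpy as np

def wellness_score(lifelog, targets, weights):
    """Aggregate one day's lifelogs into a single 0-100 wellness score.
    Each behavior is scored as attainment against a target and the
    scores are combined with behavior weights. Targets and weights
    here are illustrative, not the LWICS values from the thesis.
    """
    score = 0.0
    for behavior, value in lifelog.items():
        attainment = min(value / targets[behavior], 1.0)  # cap at 100%
        score += weights[behavior] * attainment
    return 100.0 * score / sum(weights.values())

# Hypothetical day of lifelogs with made-up targets and weights.
day = {"steps": 8200, "sleep_hours": 6.5, "active_minutes": 25}
targets = {"steps": 10000, "sleep_hours": 8, "active_minutes": 30}
weights = {"steps": 0.4, "sleep_hours": 0.4, "active_minutes": 0.2}
print(round(wellness_score(day, targets, weights), 1))
```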
The condition of machines must be monitored in terms of quality and productivity, and out-of-control observations should be detected before non-normality in a machine's condition leads to major damage such as a breakdown or explosion. As the wide deployment of Information and Communication Technology and of process monitoring systems with various sensors provides massive amounts of online data, such data proliferation creates opportunities for real-time continuous monitoring of a machine and for identifying the major contributors to its non-normality. Despite the applicability of large data sets to detecting out-of-control observations with control charts, few efforts have been made to establish a decomposition methodology based on such data. In particular, the unique nature of a clustering algorithm-based control chart is not reflected in the existing studies of decomposition because they do not focus on the characteristics of recently collected data. To fill this research gap, this research develops a decomposition methodology for a clustering algorithm-based control chart. The methodology includes a procedure for decomposing the out-of-control observations collected from the clustering algorithm-based control chart. The decomposition methodology was devised based on insights gained from the literature on control charts, decomposition, and distance measurement. In addition, two case studies on decomposing out-of-control observations, utilizing simulation data and a vessel's main engine data, provided insights for devising the methodology. The decomposition procedure consists of three steps: 1) collection of out-of-control observations, 2) decomposition of the out-of-control observations, and 3) definition of the significant major contributors. In particular, sub-steps 2.1) fixation of the centroid and 2.2) computation of the degree of non-normality were introduced to construct the proposed methodology. The proposed methodology presents a systematic process for decomposing out-of-control observations collected from a clustering algorithm-based control chart via efficient analyses of condition data. In addition, it can create a synergistic effect if incorporated into new machine learning methodologies for condition-based maintenance. From a theoretical perspective, this research extends the research area of Runger's T2 decomposition by combining existing methodologies with the characteristics of recently collected data. From a practical perspective, the results of this study can support engineers in maintaining the condition of machines efficiently.
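The following sketch illustrates one simplified reading of the decomposition procedure: the nearest in-control centroid is fixed (step 2.1) and each variable's share of the squared distance is taken as its degree of non-normality (step 2.2). The formulation is an assumption for illustration, not the thesis's exact method.

```python
import numpy as np

def decompose(obs, centroids):
    """Per-variable contribution to the distance that flagged an
    observation as out-of-control on a clustering-based control chart.
    The nearest in-control centroid is fixed and each variable's share
    of the squared Euclidean distance is taken as its degree of
    non-normality. A simplified reading of the thesis procedure.
    """
    dists = np.linalg.norm(centroids - obs, axis=1)
    nearest = centroids[np.argmin(dists)]   # step 2.1: fix the centroid
    contrib = (obs - nearest) ** 2          # step 2.2: per-variable share
    return contrib / contrib.sum()          # normalized contributions

# Toy example: two in-control centroids, one flagged observation.
centroids = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
out_of_control = np.array([0.2, 3.9, 0.1])
print(decompose(out_of_control, centroids))  # variable 2 dominates
```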
Various types and massive amounts of data are collected in the automotive industry. Such data proliferation facilitates and improves the design of new services for supporting drivers' vehicle operations. This research defines this type of service as a personal vehicle operations management (PVOM) service. A PVOM service helps drivers drive safely, conveniently, and pleasurably with the use of vehicle operational data. Despite the applicability of big data to PVOM service design, few efforts have been made to establish a design process for PVOM services based on such data. Existing methodologies for service design can be used as references for designing PVOM services with the data; however, their applicability is limited because they do not focus on the use of data for new service design. In addition, the unique nature of PVOM service design is not reflected in the existing studies because they do not focus on the design of PVOM services. To fill this research gap, this research aims to develop a data-driven methodology to design PVOM service concepts. The methodology includes a procedure for PVOM service concept design using PVOM-related data and two supporting tools for carrying out specific steps of the procedure. The procedure was devised based on insights gained from the literature related to PVOM service concept design and three case studies on designing PVOM service concepts by utilizing PVOM-related data. The procedure consists of six steps: (1) PVOM service objective definition, (2) PVOM-related data preparation, (3) PVOM-related data analysis planning, (4) PVOM-related data analysis, (5) PVOM service content generation, and (6) PVOM service concept definition. The procedure aims to help service designers understand drivers' vehicle operations through PVOM-related data analysis and then design new PVOM service concepts efficiently on the basis of that understanding. The first tool, reference tables for PVOM-related data analysis planning, helps service designers devise various plans for PVOM-related data analysis; in particular, it supports Step 3 of the proposed procedure. The second tool, a morphological matrix for PVOM service content generation, aids service designers in generating the various service contents to be delivered to drivers; it supports Step 5 of the proposed procedure. The procedure and the tools were tested and refined through a retrospective case study and laboratory experiments, respectively. The developed methodology presents a systematic design process for PVOM service concepts based on efficient analyses of PVOM-related data, and it may further create a synergistic effect if incorporated into existing methodologies for designing new service concepts. From a theoretical perspective, this research contributes to extending the research area of new service design by describing how to analyze and utilize data for designing new service concepts and by providing a basis for data-driven service innovations in the current data-rich economy. From a practical perspective, this study aids service designers in designing new PVOM service concepts by utilizing PVOM-related data.
A mobile location-based service (m-LBS) is a type of mobile service (m-service) that provides customized information or functions based on the locations of customers and their surrounding environment. M-LBS presents attractive business opportunities for various companies, and recent improvements in technology have resulted in dramatic growth of m-LBSs. The quality of m-LBS should be given careful consideration to increase customer satisfaction and enhance company competitiveness. However, the development of scales for evaluating m-LBS quality has scarcely been addressed. Although existing studies on service quality have examined the common characteristics of m-services, they do not cover the distinctive characteristics of m-LBS that other m-services do not have. This research aims to develop quality scales applicable to evaluating two representative m-LBSs: map services and intermediation services. The framework of this research consists of two phases: design of the m-LBS quality scales (Phase 1) and assessment of the m-LBS quality scales (Phase 2). In Phase 1, the scales were qualitatively designed; in Phase 2, the designed scales were quantitatively assessed with survey data from 281 respondents. As a result, a scale for map services, consisting of 9 quality dimensions and 31 measurement items, and a scale for intermediation services, consisting of 9 quality dimensions and 29 measurement items, were developed. The developed scales can effectively help practitioners understand, evaluate, and improve the quality of their m-LBSs. This research serves as a valuable complement to the service quality literature by adding new quality scales.
Service blueprinting is a technique used to visualize the service delivery process from the customer’s point of view. The blueprinted service delivery process should show the essential aspects of the service in question because the picture serves as a basis to understand and analyze the service. This thesis proposes a framework to develop a service blueprinting template specialized for visualizing the essential aspects of a specific service. The service blueprinting template is a matrix board in which customer actions and the key resources of the service are placed in rows, and the key phases of the service delivery process are placed in columns. The proposed framework is generically applicable to any type of service. On the basis of the proposed framework, three new service blueprinting templates, namely, PSS Board, Information Service Blueprint, and Experience Service Blueprint, are developed for blueprinting product-service system (PSS), information-intensive service (IIS), and experience-centric service (ExS), respectively. Users can obtain a systematic understanding of the essential aspects and the value-creation mechanism of the PSS, IIS, or ExS in question by blueprinting the service based on the corresponding template. The proposed three templates could effectively aid practitioners in evaluating, improving, and designing PSSs, IISs, and ExSs. This thesis would serve as a valuable complement to the current service blueprinting research.
A mobile content service (MCS) is a service in which customers participate with their mobile devices in order to use, play, or read content that is provided by the service provider and delivered to the customer's mobile device. Examples of mobile content services include application markets, music/video streaming services, e-book services, and information-providing services. Recent improvements in the performance of mobile devices and wireless networks have resulted in the widespread adoption of mobile devices and dramatic growth of MCSs. The improved technology has enabled users to obtain and use high-quality mobile content easily, but the cost is still high. The small displays and limited storage of mobile devices also prevent users from processing much information in a short time and from keeping much content on their devices. These restrictions lead customers to demand high service quality from MCS providers, who should be good at recommending appropriate content and supporting customers' convenient use as well as providing the content itself. To improve the service quality of an MCS, the service provider should understand what constitutes MCS quality and how to measure it accurately. However, the work related to MCS quality to date is not comprehensive enough to provide a guideline for understanding and evaluating the quality of MCSs. The objective of this research is to develop quality scales applicable to evaluating two kinds of MCS: application market services and cultural content services. Because applications and cultural content are usually high-volume and used multiple times, these two kinds of MCS differ from the simple information-providing MCSs studied in existing works. The research consists of two phases. The first phase constructs the MCS quality scales: a literature review and qualitative research are conducted to identify quality dimensions and generate measurement items for the scales. An online survey was then conducted to collect data in order to assess the reliability of each quality dimension and to identify the measurement items to be deleted to improve reliability. After the first data collection and analysis, the scale was modified; a new data set was collected to assess the reliability of the modified scale, which showed adequate reliability. The second phase assesses three kinds of validity for the scales: convergent validity, discriminant validity, and predictive validity. To assess convergent and discriminant validity, confirmatory factor analysis (CFA) was conducted and the relevant indices were calculated; predictive validity was assessed with structural equation modeling (SEM). As a refinement, a few measurement items were deleted, and the resulting scale showed adequate validity. This research is expected to make a significant contribution by providing a valid scale for understanding and measuring the quality of MCSs. The developed scale contains new quality dimensions that have not been considered before and describes the characteristics of each quality dimension for a better understanding of MCS quality, along with guidelines and insights for better applicability of the scale.
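Reliability assessments of this kind are conventionally done with Cronbach's alpha, so a short reference implementation may help; recomputing alpha after dropping an item mirrors the scale-refinement step described above. The use of Cronbach's alpha here is an assumption based on standard scale-development practice, since the abstract does not name its reliability statistic.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one quality dimension: the standard
    internal-consistency statistic. `items` is an
    (n_respondents, n_items) array of survey answers. Assumed here as
    the reliability measure; the study may use a different statistic.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```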
In multiresponse surface optimization (MRSO), responses are often in conflict. To obtain a satisfactory compromise, the preference information of a decision maker (DM) on the tradeoffs among the responses should be incorporated into the problem. In most existing works, including methods using desirability functions and loss functions, the DM expresses his/her subjective judgments on the responses through preference parameters. This approach is useful in situations where the DM's preference structure and the corresponding parameters can easily be found. However, extracting the correct preference parameter values representing the DM's preference structure is often difficult. This research proposes solution selection methods for MRSO. Each method takes a posterior or an interactive preference articulation approach. The posterior approach initially finds a set of nondominated solutions without the DM's preference information and then allows the DM to select the best solution from among them. The interactive approach progressively obtains the DM's preference information and incorporates it into the process until a solution satisfactory to the DM is found. The proposed methods have the advantage of not requiring the DM to specify preference parameter values in advance, as most existing MRSO works do. In the proposed methods, the DM only selects the most preferred solution from among candidate solutions or conducts pairwise comparisons. It is easier for the DM to select a preferred solution than to specify precise values of the preference parameters. The proposed methods thus find a satisfactory compromise solution without imposing excessive cognitive effort on the DM.
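The first stage of the posterior approach, finding the nondominated solutions, can be sketched as follows. Each row of Y holds the predicted response values of one candidate setting, and all responses are assumed (for simplicity) to be maximized; minimization responses would be negated first.

```python
import numpy as np

def nondominated(Y):
    """Return the nondominated rows of Y, where each row holds the
    predicted response values of one candidate setting and every
    response is to be maximized. A minimal sketch of the first stage
    of a posterior preference articulation approach: the DM later
    picks the best solution from this set.
    """
    keep = []
    for i, yi in enumerate(Y):
        dominated = any(
            np.all(yj >= yi) and np.any(yj > yi)
            for j, yj in enumerate(Y) if j != i
        )
        if not dominated:
            keep.append(i)
    return Y[keep]

# Toy candidates: the third is dominated by the second and drops out.
Y = np.array([[0.9, 0.2], [0.5, 0.5], [0.4, 0.4], [0.1, 0.95]])
print(nondominated(Y))
```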
A Product-Service System (PSS) is an integrated combination of products and services designed to satisfy customers' needs. PSS is gaining importance in that it provides economic and environmental benefits to both the company and the user. A PSS idea is a broad outline of the PSS business model for satisfying customers, and evaluating PSS ideas is necessary for deriving a successful one. This thesis suggests a systematic methodology to evaluate PSS ideas, called the PSS idea evaluation system. The system consists of assessment criteria and procedures for utilizing the criteria. The assessment criteria comprise sustainability criteria and customer-oriented criteria. The sustainability criteria are derived by adapting several indices developed in existing works on sustainability indexing. For the customer-oriented criteria, PSS key success factors are employed, which are 1) enhancement of convenience, 2) customization of the customer value delivery system, 3) creation of new functions, 4) reduced investment cost in the early stage as well as in total, and 5) an improved level of interaction between the PSS provider and customers. Case studies are conducted on three types of PSS, and we conclude that the suggested assessment criteria fit all three types well. The suggested PSS idea evaluation system is expected to support the development of successful PSS ideas.
To optimize multistage manufacturing processes effectively, two features of those processes should be considered. One is that a multistage manufacturing process involves multiple stages to produce a complex product; examples include automotive, LCD, PDP, semiconductor, and steel manufacturing processes. The other is that missing values are a common occurrence in process data sets owing to several causes (e.g., data discarded due to gross measurement errors, sampling inspection, and sensor failure). The main goal of this research is to develop methods for optimizing multistage manufacturing processes based on a data mining approach; in this study, the Patient Rule Induction Method (PRIM), a data mining method, is adopted as the tool for optimizing the processes. This research addresses four issues: (i) development of a method for optimizing a multistage process (multistage PRIM), (ii) enhancement of single-stage PRIM for optimizing a single-stage process with missing values (enhanced single-stage PRIM), (iii) comparison of aggregation methods for obtaining one representative optimal box via a simulation experiment (related to (ii)), and (iv) validation of the proposed methods via a comprehensive case study. The contributions of this research are as follows. First, multistage PRIM has been developed. It extends the scope of process optimization from a single stage to a multistage process, and it can use the information encapsulated in the relationships between stages when maximizing each stage's performance. Second, enhanced single-stage PRIM has been proposed. It handles missing values systematically when optimizing a single-stage process and thus yields considerable improvements in the process. Third, a comprehensive case study is conducted using a multistage semiconductor process with missing values to test and evaluate the performance of the developed methods (i.e., multistage PRIM, enhanced single-stage PRIM, and their integrated application). This is the first work to optimize a multistage manufacturing process based on a data mining approach, and it is expected to spur succeeding studies on optimizing multistage manufacturing processes.
Quality Function Deployment (QFD) provides a specific approach for ensuring quality throughout each stage of the product development and production process. Since the focus of QFD is placed on the early stage of product development, uncertainty in the input information of QFD is inevitable. If this uncertainty is neglected, QFD decisions are likely to be misleading. To avoid misleading decisions, either the uncertainty itself or its effect on the decisions should be reduced. Reducing the uncertainty itself is very difficult and costly, if not impossible; reducing its effect, on the other hand, is a realizable solution. Reducing the effect means that decisions are made in a robust manner so that they remain relatively stable despite the given uncertainty. The objective of this research is to develop an extended QFD methodology, called 'Robust QFD', which is capable of considering the uncertainty of the input information and the resulting variability of the output. The proposed methodology models, analyzes, and dampens the effects of the uncertainty and the resulting variability of the QFD output in a systematic manner. In Robust QFD, the uncertainty of the input information is first modeled quantitatively. Utilizing the modeled uncertainty, the variability is formally analyzed. Given the variability, the engineering characteristics (ECs) are prioritized. Finally, the robustness of the EC prioritization is evaluated and improved. This research consists of four issues: (i) development of a framework for Robust QFD, (ii) development of robustness indices, (iii) development of a robust prioritization method, and (iv) development of a criticality index. This research is expected to make a significant contribution to the QFD literature: the Robust QFD methodology is the first attempt to provide a unified perspective for handling the uncertainty in a systematic manner, and many future research issues can arise from it, especially from the Robust QFD framework. This research is also expected to provide effective support to QFD practitioners. By providing meaningful insights on uncertainty and robust decisions, this research promotes the effectiveness of QFD in the practice of new product development.
Quality function deployment (QFD) has been used to improve customer value. One of the acknowledged advantages of QFD is its ability to promote organizational consensus building and decision making. Accordingly, QFD planning, defined as proactive "customer-driven planning" based on QFD information, has been studied for decision makers (DMs) who determine how to achieve desired customer value. Available QFD planning models assume that the effect on customer value is cross-sectional. However, there are cases of longitudinal effect, in which customer value is formed by a cumulative effect over a certain period of time rather than the effect at a specific point in time. The objective of this research is to develop an extended QFD planning model that considers longitudinal effects. The proposed QFD planning model has two types according to the number of targets: the first type handles one target at the end of the planning horizon (the 'Single-Target model'), whereas the second type has a series of targets over the planning period (the 'Sequential-Target model'). This research first develops extended QFD planning models for the single-target problem. Second, the proposed models are compared with conventional QFD planning models through a set of simulation experiments. Third, extended QFD planning models for the sequential-target problem are proposed, and they are compared with the single-target model through a set of simulation experiments. This research makes a significant contribution to QFD planning in that the proposed model is the first to take longitudinal effects into consideration. By considering the longitudinal effect explicitly, QFD planning is extended to multi-period planning with a planning horizon. As a result, the solution simultaneously provides both the optimal set of design requirements (DRs) and their selection timing over multiple periods. Moreover, it is possible to utilize the entire planning horizon by including sequential targets. Due to these properties, the proposed model brings a more realistic approach to QFD planning.
Process optimization is to determine the setting of process variables that optimizes a quality characteristic of the product in a manufacturing process. Here, the process variables and the quality characteristic are called input variables and a response, respectively. Response surface methodology (RSM), a collection of statistical and mathematical techniques useful for optimizing processes, consists of an experimental strategy for exploring the space of the input variables, empirical statistical modeling to develop an appropriate approximating relationship between the response and the input variables, and optimization methods for finding the values of the input variables that produce desirable values of the response. As the form of the true response function is unknown in practice, we must approximate it. In fact, successful use of RSM depends critically on the process engineer's ability to develop a good empirical model of the true response function. However, there are occasions when a good empirical model is very difficult to build. For example, in the process industry, design of experiments is very difficult to use on the spot; instead, there is a large amount of operational data observed via various sensors. Unlike experimental data, such observational data have characteristic features that make it difficult to build a good empirical model, and when the empirical model deviates from the true model, the resulting solution may be far from optimal. An alternative approach is to find the optimum condition on the input variables directly, without an explicit model. This approach essentially relies on the observational data and can be called a data mining approach. One representative method employing this approach is the patient rule induction method (PRIM). PRIM seeks a set of sub-regions of the input variable space within which the performance of the response is considerably better than that of the entire input domain. Although several applications of PRIM have been reported, the existing PRIM has some limitations for process optimization. The main goal of this research is to develop a rule induction method for process optimization in situations where design of experiments is very difficult to use on the spot and a large amount of operational data is available from various sensors. The proposed approach solves the problem directly, without model building. To provide process engineers with a realistic solution, three characteristic features of the manufacturing process are considered: multiple responses are handled, a new criterion for selecting an optimal box is proposed, and a systematic method for determining a nominal point is provided. This research first develops a modified version of the existing PRIM for each feature; the proposed approach is then validated with a comprehensive case study. The major contribution of this research is that it is the first work to develop a data mining approach to process optimization with multiple responses, and the proposed approach produces a more feasible and more practical solution than those of existing studies. This research is thus expected to spur succeeding studies on the data mining approach to process optimization, providing process engineers with more realistic solutions in practice.
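For readers unfamiliar with PRIM, the sketch below shows its core top-down peeling step for a single response: at each iteration, the alpha-fraction slice of one input variable whose removal most increases the in-box mean response is peeled away, until support becomes too small or no peel improves the box. It is a generic single-response sketch, not the modified multi-response PRIM developed in this research.

```python
import numpy as np

def prim_peel(X, y, alpha=0.05, min_support=0.1):
    """One-variable-at-a-time top-down peeling, the core of PRIM.
    At each step, remove the alpha-fraction slice (from the low or
    high end of one input) that most increases the mean response
    inside the box; stop when support falls below min_support or no
    peel improves the box mean. Single-response sketch only; the
    thesis adds multiple responses, a new box-selection criterion,
    and a nominal-point method.
    """
    inside = np.ones(len(y), dtype=bool)
    box = [(-np.inf, np.inf)] * X.shape[1]
    while inside.mean() > min_support:
        best = None
        for j in range(X.shape[1]):
            lo = np.quantile(X[inside, j], alpha)
            hi = np.quantile(X[inside, j], 1 - alpha)
            for keep, bound, side in [(X[:, j] >= lo, lo, "lo"),
                                      (X[:, j] <= hi, hi, "hi")]:
                cand = inside & keep
                if cand.sum() and (best is None or y[cand].mean() > best[0]):
                    best = (y[cand].mean(), cand, j, bound, side)
        if best is None or best[0] <= y[inside].mean():
            break                      # no peel improves the box mean
        _, inside, j, bound, side = best
        lo_b, hi_b = box[j]
        box[j] = (bound, hi_b) if side == "lo" else (lo_b, bound)
    return box, inside
```

The returned box (one interval per input variable) is the induced rule; the full PRIM also includes a bottom-up pasting phase, omitted here for brevity.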
Despite its brilliant success, Six Sigma has suffered from two shortcomings, namely, the lack of a systematic method to identify the right projects in the "Define" stage and to sustain the improvement in the "Control" stage. The integration of Six Sigma and Business Process Management (BPM) seems to be a promising way to overcome these shortcomings. This thesis first reviews the existing efforts on this issue and then proposes a framework for an effective integration of Six Sigma and BPM. The framework consists of five phases: DEFINE, EXECUTE, MONITOR, ANALYZE, and IMPROVE (DEMAI). This thesis also proposes a systematic method to identify process-oriented metrics (POMs), called the POM method. The identification of the right performance metric is a critical activity of the DEFINE phase. The POM method is composed of three phases: identification of the activities (Phase I), evaluation of the attributes of the activities (Phase II), and extraction of the metric candidates (Phase III). The method is demonstrated through a case study on a disposal process for idle facilities.
Multiresponse optimization (MRO) problems often involve incommensurate and conflicting responses. To obtain a satisfactory compromise in such a case, a decision maker (DM)'s preference information on the tradeoffs among the responses should be incorporated into the problem. Most of the work in MRO, including the conventional desirability functions approach and loss functions approach, requires that all the preference information of the DM be articulated prior to solving the problem. However, it is difficult and impractical for the DM to specify all the required preference information in advance. The objective of this research is to develop an interactive approach based on the conventional desirability functions approach, called the interactive desirability functions approach (IDFA), for MRO to overcome this common limitation of the existing work. More specifically, this research first develops three interactive desirability function methods, which allow the DM to adjust just one of the preference parameters, namely, the shape, bound, or target of the desirability functions, and then a unified interactive desirability functions method combining the three. This research also discusses the determination of the to-be-adjusted value of the preference parameter based on a preposterior analysis. This research makes a significant contribution to MRO in that the proposed IDFA facilitates the preference articulation process. The specific methods of IDFA, namely, the shape-, bound-, and target-based methods and the unified interactive desirability functions method, give flexible options in the preference articulation process. In particular, the unified method provides various channels through which the DM can articulate his/her preference information. Moreover, the preposterior analysis provides rich information about the tradeoffs among responses so that the DM can determine the to-be-adjusted value with more confidence, based on a thorough understanding and evaluation. Consequently, the proposed IDFA would be highly effective in generating a compromise solution that is faithful to the DM's preference structure.
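To fix ideas, here is a minimal sketch of the conventional desirability machinery that IDFA builds on: a Derringer-Suich two-sided desirability for a target-is-best response and the geometric-mean overall desirability. The shape parameters s and t, the bounds, and the target are exactly the preference parameters the interactive methods let the DM adjust; the numbers in the usage lines are made up.

```python
import numpy as np

def desirability_target(y, low, target, high, s=1.0, t=1.0):
    """Derringer-Suich two-sided desirability for a target-is-best
    response: 0 outside [low, high], 1 at the target. The shape
    parameters s and t, bounds, and target are the preference
    parameters a DM would adjust interactively.
    """
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Illustrative values only.
d1 = desirability_target(55.0, low=50, target=60, high=70, s=2)
d2 = desirability_target(62.0, low=50, target=60, high=70, t=0.5)
print(overall_desirability([d1, d2]))
```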
Customer loyalty is a customer's cumulative satisfaction with a service over time. Not only current customer satisfaction but also past customer satisfaction can have a significant effect on customer loyalty, yet few studies investigate the significance of the effect of past customer satisfaction. This paper proposes an analysis framework to assess the significance of the effect of customer satisfaction on customer loyalty over time by using time-series methods, namely transfer functions and the autoregressive distributed lag model. Case study results for high-speed Internet services are also discussed.
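A minimal sketch of the autoregressive distributed lag idea, fit by ordinary least squares with NumPy, is shown below; significant coefficients on lagged satisfaction would indicate that past satisfaction still moves current loyalty. The lag orders and inputs are illustrative assumptions, not those of the case study.

```python
import numpy as np

def ardl_fit(loyalty, satisfaction, p=1, q=2):
    """Fit a simple autoregressive distributed lag model
        loyalty_t = c + sum_i a_i*loyalty_{t-i}
                      + sum_j b_j*satisfaction_{t-j} + e_t
    by ordinary least squares. Nonzero b_j for j >= 1 capture the
    lingering effect of past satisfaction on current loyalty, the
    question the framework examines. Illustrative sketch only.
    """
    start = max(p, q)
    rows, ys = [], []
    for t in range(start, len(loyalty)):
        row = [1.0]                                        # intercept
        row += [loyalty[t - i] for i in range(1, p + 1)]   # AR terms
        row += [satisfaction[t - j] for j in range(0, q + 1)]  # DL terms
        rows.append(row)
        ys.append(loyalty[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(ys), rcond=None)
    return coef  # [c, a_1..a_p, b_0..b_q]
```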
Quality Function Deployment (QFD) is a concept and mechanism for translating the "voice of the customer" through the various design stages into a final product. The house of quality (HOQ) chart, a principal tool for QFD, contains various kinds of information, including customer attributes (CAs), engineering characteristics (ECs), the relationships between CAs and ECs, and the correlations among ECs. In an HOQ, it is important to grasp how much a CA is explained by the given set of ECs. If a CA is explained sufficiently by the given ECs, the customer perception of the CA can be improved simply by improving the product's performance on those ECs. If not, new ECs affecting the CA should be identified and improved instead. This paper develops a systematic procedure to evaluate the "CA Coverage" index, the degree to which a CA is explained by the EC set in a given HOQ. The proposed method is applied to an HOQ for the ADSL service as an illustrative example.
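One plausible way to operationalize such a coverage index, assuming product-level scores on the ECs and customer ratings on the CA are available, is the R-squared of a regression of the CA on the ECs. The sketch below is an illustrative simplification, not the paper's actual procedure.

```python
import numpy as np

def ca_coverage(ec_scores, ca_scores):
    """Share of variance in customer ratings of a CA explained by the
    ECs in the HOQ: the R^2 of a least-squares fit of the CA on the
    EC scores. An assumed operationalization of a "CA Coverage"
    index; the paper's procedure is more elaborate.

    ec_scores : (n_products, n_ECs) measured EC values
    ca_scores : (n_products,) customer ratings of the CA
    """
    X = np.column_stack([np.ones(len(ca_scores)), ec_scores])
    coef, *_ = np.linalg.lstsq(X, ca_scores, rcond=None)
    resid = ca_scores - X @ coef
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((ca_scores - ca_scores.mean()) ** 2)
    return 1.0 - ss_res / ss_tot  # near 1: CA well covered by the ECs
```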
Quality is considered one of the most important factors for success in the marketplace. Kano proposed a quality model in which quality is classified into quality elements: one-dimensional quality, must-be quality, attractive quality, and so on. There are a few methods for classifying customer requirements into Kano's quality elements; in particular, the Kano method is widely known among researchers. The Kano method uses a Kano questionnaire and Kano evaluation tables: a customer's answer in the questionnaire is assigned to a quality element through the evaluation tables. Customer answers are inherently ambiguous and uncertain, yet the Kano method treats them as precise data. This causes a vague relationship between customer answers and quality elements in the Kano evaluation tables. To complement this limitation of the Kano method, we introduce fuzzy theory. This paper describes how customer requirements in a fuzzy environment can be classified into quality elements and describes an application to the ADSL service.
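The sketch below illustrates the fuzzy idea in a simplified form: instead of forcing each answer pair onto a single cell of the Kano evaluation table, the respondent supplies membership degrees over the answers and the table lookup is weighted accordingly. The evaluation table is the standard Kano one; the membership mechanics are an illustrative assumption, not the paper's exact fuzzy formulation.

```python
# Simplified fuzzy Kano classification. The categories and table are
# the standard Kano ones; weighting the table lookup by membership
# degrees is an illustrative simplification of the fuzzy treatment.
CATEGORIES = ["A", "O", "M", "I", "R", "Q"]  # attractive, one-dimensional,
                                             # must-be, indifferent,
                                             # reverse, questionable
# Rows: functional answer (like..dislike); cols: dysfunctional answer.
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "R", "R", "R", "Q"],
]

def fuzzy_kano(func_membership, dysf_membership):
    """Combine fuzzy answers into category memberships.
    Each argument is a length-5 vector of membership degrees over the
    answers (like, must-be, neutral, live-with, dislike), summing to 1.
    """
    weight = {c: 0.0 for c in CATEGORIES}
    for i, fm in enumerate(func_membership):
        for j, dm in enumerate(dysf_membership):
            weight[KANO_TABLE[i][j]] += fm * dm
    return weight

# A respondent who mostly "likes" the feature present and mostly
# "dislikes" it absent, with some hesitation on both answers.
print(fuzzy_kano([0.7, 0.3, 0, 0, 0], [0, 0, 0.2, 0, 0.8]))
```

The output spreads the answer over categories (here mostly one-dimensional, partly must-be and attractive) instead of forcing a single crisp label.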
The main goal of the research is to develop an integrated monitoring and diagnosis methodology for batch processes. The research has three major modules: monitoring, diagnosis, and their integration. For each of the modules, an on-line methodology for batch processes is developed, which can be used while a batch run is being operated. The performance of each module is demonstrated using a PVC batch process. In this research, a data-driven approach is consistently adopted to build the empirical models in each module, which need only process data to accomplish the unique tasks of the three modules. In the monitoring module, a predictive monitoring method is developed. One major problem in the on-line use of batch data is that the data are not complete until the end of the batch operation. The unmeasured portion of the data, called the future observations, has to be predicted, which greatly affects monitoring performance. To accomplish predictive monitoring, a new method for predicting the future observations is proposed. The proposed method, unlike existing prediction methods, makes extensive use of past batch trajectories. It is able to detect the abnormalities of a new batch earlier and thus is more effective in predictive monitoring. In the diagnosis module, a new FDA-based diagnosis method is proposed to identify an assignable cause of a fault. An empirical model, called a discriminant model, is constructed using various past batch runs, and the behavior of the new batch is referenced against it to make a diagnostic decision. However, the FDA-based diagnosis method has a major limitation: it does not work when the fault batches available for building the discriminant model are insufficient. This is a highly critical issue in practice because sufficient fault batches are likely to be unavailable. A modified FDA-based diagnosis method is therefore also proposed to handle the data insufficiency problem; it has been shown to handle that problem and yield reliable diagnosis performance. The monitoring and diagnosis tasks are closely related and highly interdependent operational tasks. Research and development in each of the two tasks is relatively active, but little has been done to establish a method for coordinating them. This research proposes an on-line method for information sharing between monitoring and diagnosis in batch processes. The proposed method utilizes an index, called the variable influence (VI) index, to yield reduced cause candidates that are transferred to the diagnosis module. By comparing the off-line VI index for fault data with the on-line VI index of a new batch, the proposed method makes it possible to narrow down the cause candidates of a fault. The use of the proposed framework as a preprocessing step of the diagnosis module has significantly improved diagnosis performance at the onset of a fault. The contribution of this research is as follows: first, an on-line prediction method for batch processes has been developed based on the fault library. Due to its capability of approximating the abnormal behaviors of a new batch, the prediction method is expected to be applied successfully in cases where the prediction of future observations is required.
Second, an on-line FDA-based diagnosis method for batch processes has been developed, which is the first attempt to develop a systematic data-driven diagnosis method. As such, the proposed diagnosis framework and model are expected to serve as a platform for developing more comprehensive research issues. As an extension of the FDA-based diagnosis method, the modified FDA-based diagnosis method using a pseudo batch was proposed to handle the data insufficiency problem. Third, a data-driven information sharing framework for coordinating the on-line monitoring and diagnosis modules has been developed for batch processes to enhance diagnosis performance. This is the first work to link the on-line monitoring and diagnosis of batch processes in a data-driven context.
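As a minimal illustration of using past batch trajectories to fill in future observations (the key step of the monitoring module described above), the sketch below completes a running batch with the future portion of the most similar past batch. This nearest-trajectory rule is an illustrative assumption, far simpler than the prediction method developed in the thesis.

```python
import numpy as np

def predict_future(new_partial, past_batches):
    """Complete a running batch trajectory with the future portion of
    the most similar past batch. Each past batch is a (T, n_vars)
    array of a full run; new_partial covers only the first k time
    points. A minimal nearest-trajectory sketch of making extensive
    use of past batch trajectories (including fault runs), not the
    thesis's exact prediction method.
    """
    k = new_partial.shape[0]
    dists = [np.linalg.norm(b[:k] - new_partial) for b in past_batches]
    best = past_batches[int(np.argmin(dists))]
    return np.vstack([new_partial, best[k:]])  # measured + predicted part
```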
Recently, with the development of the service industry, the factors related to service quality have been emphasized for the survival of enterprises. To improve the service quality perceived by customers, it is necessary to analyze the causal relationship between the service characteristics provided by enterprises and service quality. In this analysis, service quality is measured by various dimensions and service characteristics are measured by various indexes. When a time lag existed between the service characteristic indexes and the service quality dimensions, I analyzed the causal relationship between the two considering the time lag. For that, I extracted factors from the service characteristic indexes and service quality dimensions through factor analysis and found the time lag between the service characteristic factors and the service quality factors through a transfer function model. Considering this time lag information, I analyzed the causal relationship between the service characteristic indexes and the service quality dimensions using structural equation modeling. I also performed simulations under various experimental conditions to prove the validity of the proposed model, and I compared the proposed model with two alternative models to demonstrate its efficiency.
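As a simplified illustration of the lag-identification step, the sketch below picks the lag at which a service-characteristic factor correlates most strongly with a later service-quality factor. The study itself uses a transfer function model, so this cross-correlation shortcut is only an assumption-laden stand-in.

```python
import numpy as np

def best_lag(x, y, max_lag=6):
    """Find the lag at which a service-characteristic factor x is most
    strongly correlated with a service-quality factor y (y lagging x).
    A cross-correlation shortcut to the lag that the study identifies
    with a transfer function model; illustrative only.
    """
    corrs = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            corrs[lag] = np.corrcoef(x, y)[0, 1]
        else:
            corrs[lag] = np.corrcoef(x[:-lag], y[lag:])[0, 1]
    return max(corrs, key=lambda k: abs(corrs[k])), corrs

# Synthetic check: y is x shifted by 2 periods plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.r_[np.zeros(2), x[:-2]] + 0.3 * rng.normal(size=200)
print(best_lag(x, y)[0])  # recovers the lag of 2
```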
Mass customization is a paradox-breaking manufacturing paradigm that aims at the cost efficiency of mass production and the customized products of craft manufacturing at the same time. Several methodologies, including product architecture, modular design, and product platforms, have been suggested to achieve mass customization, but the existing methodologies share the limitation that they were developed solely from the mass production perspective. This research suggests a new kind of product development methodology for mass customization. The notion of the product platform is employed as the basis of the suggested methodology. Two types of indices, namely a similarity index and a sensitivity index, are proposed to evaluate the physical elements of a product from the 'mass' and 'customization' perspectives, respectively, and finally to make the product platform formation decision. A case study is presented to validate the effectiveness of the suggested methodology. This research is a first step toward a balanced product development methodology for mass customization and helps develop a design process that is robust to customer trends.
A common problem encountered in product or process design is the selection of optimal parameters that involves simultaneous consideration of multiple response characteristics, called a multiple response surface problem. Several approaches have been proposed for multiple response surface optimization (MRO), including the desirability functions approach and the loss functions approach. The existing MRO approaches require that all the preference information of the decision maker (DM) be extracted prior to solving the problem. However, such prior preference articulation is difficult to implement in practice. This thesis proposes an interactive optimization approach to the MRO problem to overcome this common limitation of the existing approaches. In particular, we demonstrate the use of the Step Method (STEM), one of the well-known interactive optimization methods in the multiple objective optimization literature, in solving MRO problems. A modified version of STEM is developed to enhance the feasibility and solvability of the method for the MRO problem. The characteristics of the proposed method are also discussed in conjunction with the existing MRO approaches and STEM.
The Critical To Quality (CTQ) characteristic plays a very important role in Six Sigma quality programs. However, there has been a lack of research on the systematic selection of CTQs. Various components and events should be taken into account in the CTQ selection procedure. In this thesis, we first define all the components and events related to the CTQ selection process, which we call “Components,” “Activities,” and “Relations.” We then build an analysis model of the CTQ selection procedure, called the CAR model, using these Components, Activities, and Relations. We represent the CAR model using IDEF0 and IDEF3 diagrams. The CAR model should be very useful in evaluating and devising an improved CTQ selection model. We also evaluate and improve several existing CTQ selection models using the proposed CAR model.
These days, consumers consider the image and impression factors of a product as well as its performance factors when purchasing a product in the marketplace. Therefore, image and impression dimensions should be duly considered during the product design stages. To apply usability dimensions in product design, we can use relationship models that quantitatively describe the relationship between a usability dimension and human interface elements (HIEs). However, considering usability dimensions in product design is difficult because the relationship models can only evaluate the usability dimensions of existing products. It is therefore necessary to generate design specifications that take the usability dimensions into account. The objective of this paper is to determine the optimal HIE levels that bring the evaluation of an audio/video product close to the satisfaction level of customers or product developers while simultaneously minimizing the difference between their evaluations, for both a single usability dimension and multiple usability dimensions. That is, the mean of the evaluation should be close to the target while the standard deviation of the evaluation is minimized. This paper proposes a framework that finds the optimal HIE levels for the usability dimensions using the relationship models and response surface methods. For a single usability dimension, we apply the dual response surface method with a fuzzy modeling approach that maximizes the minimum satisfaction of the mean and the standard deviation; the resulting optimal HIE levels provide a product specification reflecting the specific usability dimension. Because consumers consider multiple usability dimensions rather than a single one when purchasing products, we must also determine the optimal HIE levels for multiple usability dimensions. For this case, we propose a method based on the multiple response surface method, which maximizes overall satisfaction through objective functions that convert the individual satisfaction value of each response into an overall satisfaction value. These optimal HIE levels provide design values reflecting the optimized multiple usability dimensions. The framework suggested in this paper is expected to provide optimal HIE levels that bring the mean of the evaluation close to the target while minimizing its standard deviation. These optimal HIE levels can serve as guidelines for product design that takes usability dimensions into consideration, and the methods can be extended to the design of other consumer products with optimal design specifications reflecting usability dimensions.
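The sketch below illustrates the maximin idea behind the fuzzy dual response step: choose HIE levels (coded in [-1, 1]) that maximize the smaller of two membership degrees, one for “mean evaluation near target” and one for “small standard deviation.” The fitted models, targets, and membership shapes are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

TARGET, MEAN_TOL = 7.0, 2.0   # target mean evaluation and tolerance
STD_WORST = 1.5               # std. dev. at which satisfaction drops to zero

def mean_hat(x):  # hypothetical fitted model of the mean evaluation
    return 7.5 - 0.8 * (x[0] - 0.5) ** 2 - 0.6 * x[0] * x[1]

def std_hat(x):   # hypothetical fitted model of the standard deviation
    return 0.4 + 0.3 * x[1] ** 2

def mu_mean(x):   # triangular membership: 1 at target, 0 at +/- MEAN_TOL
    return max(0.0, 1.0 - abs(mean_hat(x) - TARGET) / MEAN_TOL)

def mu_std(x):    # linear membership: 1 at zero std. dev., 0 at STD_WORST
    return max(0.0, 1.0 - std_hat(x) / STD_WORST)

res = minimize(lambda x: -min(mu_mean(x), mu_std(x)), x0=[0.0, 0.0],
               method="Nelder-Mead", bounds=[(-1, 1), (-1, 1)])
print(res.x, -res.fun)  # HIE levels and attained minimum satisfaction
```

For multiple usability dimensions, the same machinery would aggregate the per-dimension satisfaction values into an overall objective instead of the pairwise minimum.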
The focus of traditional measurement models for service quality, such as SERVQUAL and SERVPERF, has been concentrated on service quality itself. Through these measurement models, which do not include customer behavior among their constructs, we cannot gain insight into the relationship between service quality and managerial outcomes. The purpose of this study is to propose a measurement model that considers customer behavior as well as service quality, and to develop an analysis procedure for the measurement model. The proposed measurement model is a mixture of the SERVQUAL model and the ACSI model. In addition, survey items for the proposed model and an analysis procedure are presented for the internet shopping mall business.
Classical SPC methods concentrate on dealing with assignable causes of a process only after the process has gone out of control. This thesis presents a qualitative representation framework for capturing important information in process data, since nonrandom patterns in control charts can provide useful information about the process. The proposed method aims to detect latent patterns of the critical process variable associated with a given product quality characteristic through pattern recognition analysis. The proposed method combines a syntactical representation process with a statistical classification process. Process data are represented through the following steps: (i) preprocess the process data, (ii) extract features from the process data and identify their pattern bases, (iii) classify the process pattern bases, and (iv) assign labels to the process patterns. The proposed method is demonstrated through a shadow mask manufacturing process.
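As a toy version of steps (ii)-(iv), the sketch below extracts two simple features from a window of process data, a fitted slope and the longest run on one side of the center line, and assigns a coarse pattern label. The features and thresholds are illustrative, not those of the proposed syntactic representation.

```python
import numpy as np

def pattern_label(window, center, slope_thr=0.05, run_thr=7):
    """Label a window of process data as 'trend', 'shift', or 'random'."""
    slope = np.polyfit(np.arange(len(window)), window, 1)[0]
    signs = np.sign(window - center)
    run = longest = 1
    for a, b in zip(signs, signs[1:]):
        run = run + 1 if a == b and a != 0 else 1
        longest = max(longest, run)
    if abs(slope) > slope_thr:
        return "trend"
    if longest >= run_thr:
        return "shift"
    return "random"

rng = np.random.default_rng(1)
drifting = rng.normal(10, 1, 30) + 0.1 * np.arange(30)
print(pattern_label(drifting, center=10))  # expected: 'trend'
```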
Concurrent engineering (CE) is a new product development concept that aims to integrate the expertise of various functional disciplines during the design phase. Quality function deployment (QFD) has played a major role in CE. The basic idea of QFD is to translate the customers’ desires into design or engineering characteristics. Although many successful applications of QFD have been reported worldwide, designers face impediments to the adoption of QFD as a product design aid. One difficulty associated with the application of QFD is the large size of the house of quality (HOQ) chart, the principal tool of QFD; it is well known that managing a design project becomes more difficult and inefficient as the problem size grows. Another difficulty is how to handle the complicated correlations among engineering characteristics (ECs) given in the roof of the HOQ chart. This research develops formal approaches to reducing the complexity of a design problem in CE; more specifically, it focuses on reducing the complexity of an HOQ chart. Two approaches are proposed: decomposition and restructuring. The decomposition approach partitions an HOQ chart into several smaller sub-HOQ charts that can be solved efficiently and independently. The restructuring approach restructures the ECs of a given HOQ chart to create a new HOQ chart that is smaller in size and simpler with respect to the correlation structure. Other relevant issues in QFD, such as consistency checking, choice of weighting scale, and normalization, are also discussed in this research. By decomposing a large HOQ chart into smaller sub-HOQ charts or restructuring it into a smaller one, the design team can not only enhance the concurrency of the design activities but also reduce the time, effort, and cognitive burden required for the analysis. This would help obviate the objections to the adoption of QFD as a product design aid and improve the efficiency of its use in practice.
The dual response approach, based on the response surface methodology framework, has proven to be a powerful tool in the modern quality control community. The fuzzy modeling approach to optimizing the dual response system offers many advantages over existing methods. This thesis presents a new approach using proposed membership functions, which play an important role in the fuzzy modeling approach. With the new membership functions, sensitivity analysis of the final results can be easily performed. Two examples are given to demonstrate how sensitivity analysis can be practically implemented. The simulation results show the efficiency of the proposed membership functions and how the decision maker can benefit from them.
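A minimal sketch of the point, under assumed functional forms: with a parametric membership function, sensitivity analysis reduces to perturbing the shape parameter and re-reading the satisfaction degree. The exponential form below is an illustrative stand-in for the proposed membership functions.

```python
import numpy as np

def membership(deviation, tol, s):
    """Satisfaction in [0, 1] for a deviation from target; s > 0 sets the shape."""
    d = np.clip(abs(deviation) / tol, 0.0, 1.0)  # normalized deviation
    return (np.exp(s * (1.0 - d)) - 1.0) / (np.exp(s) - 1.0)

for s in (0.5, 1.0, 2.0):  # perturb the shape parameter
    print(f"s = {s}: membership = {membership(0.3, tol=1.0, s=s):.3f}")
```

Plotting the attained optimum against s in this fashion shows the decision maker how robust the recommended settings are to the choice of membership shape.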
Taguchi parameter design is used extensively in industry to determine the optimal set of process parameters necessary to produce a product that meets or exceeds customer expectations of performance level while minimizing performance variation. The majority of research in Taguchi parameter design has concentrated on approaches that optimize process parameters based on experimental observation of a single quality characteristic. This paper develops a statistical method, the DMT method, to evaluate and optimize multiple quality characteristic problems. The method incorporates desirability functions, a performance statistic based on the Mean Squared Error, and data-driven transformations to provide a systematic approach that is adjustable to a variety of situations and easy for the non-expert to apply. This paper presents the DMT method in a step-by-step format and applies it to two examples from the Taguchi parameter design literature.
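The sketch below assembles the DMT ingredients in miniature: an MSE-based performance statistic per characteristic, a one-sided desirability mapping, and a geometric-mean composite. Targets, limits, and data are illustrative assumptions, not an application of the paper's examples.

```python
import numpy as np

def mse(y, target):
    """Mean squared error of observations about the target."""
    return np.mean((np.asarray(y) - target) ** 2)

def desirability_stb(m, upper):
    """Smaller-the-better desirability: 1 at m = 0, 0 at m >= upper."""
    return max(0.0, 1.0 - m / upper)

# each characteristic: (observations, target, worst acceptable MSE)
chars = {"y1": ([9.8, 10.1, 10.0], 10.0, 0.5),
         "y2": ([4.6, 5.2, 5.1], 5.0, 0.5)}
d = [desirability_stb(mse(y, t), u) for y, t, u in chars.values()]
print("overall desirability:", round(float(np.prod(d)) ** (1 / len(d)), 3))
```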
Global competition and rising environmental costs have raised concern for the survival of the domestic cast metal industry. Industry leaders have identified the development of environmental technologies to reduce air emissions as the top research priority. This research provides a foundation for this effort by developing the basic model components for determining the feasibility of a new binder system with reduced emission characteristics. Three issues are explored to achieve this goal: process-based emission models, competitive technical alternatives to a new environmental binder, and valuing emission reduction in a way that evaluates product feasibility. Since emission models are essential tools to support binder product development, the research first evaluates the accuracy of emission rate indices and the process characteristics that influence emissions, and develops a process-based emission model. Core presence, sand-to-metal ratio, and sea coal content are identified as significant process parameters influencing either VOC or benzene emissions. Second, binder product development requires a definition of the costs of feasible alternatives to a new binder. This research identifies the competitive technological alternatives to a new binder and develops cost models for cast metal industry applications of emission control equipment. A product space (mapping) model examines the conditions under which a new binder may be a competitive product compared with control technology. The research identifies incineration technology as the primary competition to a new binder; however, it concludes that present techniques to capture emissions are inefficient, resulting in excessive ambient air volumes that drive incineration costs to noncompetitive levels. Finally, the research develops a technique to assign a monetary value to emission reduction and uses this information in a binary logit choice model with risk-aversion-based weights to estimate the likelihood of the commercial success of a new binder. For a given combination of binder performance characteristics, a probability of new binder selection can be estimated. The choice model indicates that a new binder concept may have a probability of decision maker choice in excess of 70% even with a negative impact on production costs. The research concludes that development of a new environmental binder system is a feasible project: the emission models to support new binder product development can be developed; a new binder can be competitive compared with control technology options; and the choice model predicts that decision makers will select a new binder given specific performance characteristics and competitive circumstances. The research recommends that the industry work cooperatively to develop a common emission database and that the technical issues of emission capture efficiency deserve additional study. The research extends the binder feasibility model to address the issues of advanced technology (AT) product development. The methodology employed to value new binder emission reduction is applied to the general problem of valuing the intangible benefits of advanced technologies.
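The sketch below shows the shape of the binary logit calculation: the probability that a decision maker chooses the new binder over incineration given weighted differences in valued attributes. The attributes, weights, and coefficients are illustrative assumptions, not the study's estimates.

```python
import math

def p_choose_binder(attr_diffs, weights):
    """Binary logit: P = 1 / (1 + exp(-sum(w_i * dx_i)))."""
    v = sum(w * d for w, d in zip(weights, attr_diffs))
    return 1.0 / (1.0 + math.exp(-v))

diffs = [0.8, -0.3, 0.2]    # binder minus incineration: emission value, cost, risk
weights = [2.0, 1.5, 1.0]   # risk-aversion-based importance weights
print(f"P(choose new binder) = {p_choose_binder(diffs, weights):.2f}")
```

Under these made-up numbers the binder is chosen despite its cost penalty, mirroring the abstract's finding that selection probabilities above 70% can coexist with a negative production cost impact.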
Quality function deployment (QFD) has been successfully applied in many industries, allowing them to precisely translate customers’ needs into detailed requirements for product development. A major difficulty in applying QFD is the large size of the house of quality (HOQ) matrix. A useful concept for handling a matrix of this kind is the decomposition of a large HOQ matrix into several submatrices that are more manageable in size. Many algorithms have been developed for decomposing the large machine-component matrix in group technology (GT). Most practical HOQ matrices, however, are nonbinary and nondiagonal, and no systematic algorithms exist for decomposing such matrices. This thesis develops two approaches whose objective is to optimize the sum of the entries in the submatrices. Approach I decomposes a nonbinary, nondiagonal matrix according to the user’s assigned ranges for the number of rows and columns defining each submatrix. Approach II decomposes such a matrix based upon the user’s assigned number of submatrices to be formed.
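For intuition, the sketch below applies a centroid-based reordering heuristic (in the spirit of rank order clustering, generalized to weighted entries) that pulls strong relationships toward the diagonal of a nonbinary HOQ matrix so candidate submatrices become visible. It is an illustrative heuristic, not Approach I or II, and it assumes every row and column has at least one nonzero entry.

```python
import numpy as np

def block_reorder(M, iters=10):
    """Alternately sort rows and columns by their weighted centroid positions."""
    rows, cols = np.arange(M.shape[0]), np.arange(M.shape[1])
    for _ in range(iters):
        A = M[np.ix_(rows, cols)].astype(float)
        rows = rows[np.argsort(A @ np.arange(A.shape[1]) / A.sum(axis=1))]
        A = M[np.ix_(rows, cols)].astype(float)
        cols = cols[np.argsort(np.arange(A.shape[0]) @ A / A.sum(axis=0))]
    return rows, cols

hoq = np.array([[9, 3, 0, 0],   # customer needs x engineering characteristics
                [0, 0, 9, 3],   # (1-3-9 relationship weights)
                [3, 9, 0, 0],
                [0, 0, 3, 9]])
r, c = block_reorder(hoq)
print(hoq[np.ix_(r, c)])        # strong 2x2 blocks emerge near the diagonal
```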
Quality Function Deployment (QFD) is a conceptual methodology that converts the voice of the customer into the necessary design requirements for each stage of a product’s planning cycle. Applications of QFD in the manufacturing industry have grown vastly since the 1980s. With the increasing competitiveness among service industries, there is also a need to determine the design requirements within those industries that will maximize customer satisfaction. This thesis addresses the issues involved in applying QFD to the service environment. To fully understand the differences between applying QFD in the manufacturing environment and in the service environment, the definition of quality in each of these types of industries is discussed. The characteristics of QFD are explained in detail, followed by the problems that arise when applying the methodology to service industries. A review of recent service industry examples and recommendations for addressing the problematic areas are given. When utilizing QFD, one must complete a set of relationship matrices known as the House of Quality (HOQ). Moreover, linear programming is recommended as the optimization technique for utilizing the HOQ to determine the targeted design requirements that will maximize customer satisfaction. To illustrate and validate the recommended procedure, a pharmacy example is given and discussed.
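The sketch below gives the flavor of the recommended linear program: choose attainment levels of the design requirements to maximize importance-weighted customer satisfaction, accrued through HOQ relationship weights, subject to a resource budget. All numbers are illustrative, not the pharmacy example's data.

```python
import numpy as np
from scipy.optimize import linprog

R = np.array([[9, 3, 1],   # HOQ relationships: customer needs (rows)
              [1, 9, 3],   # x design requirements (columns)
              [3, 1, 9]])
importance = np.array([5, 3, 4])     # customer importance ratings
c = -(importance @ R)                # maximize satisfaction => minimize its negative
cost = np.array([[4.0, 2.0, 3.0]])   # resource use per unit of attainment
res = linprog(c, A_ub=cost, b_ub=[6.0], bounds=[(0, 1)] * 3)
print(res.x)                         # attainment level chosen for each requirement
```

With these numbers the budget goes first to the requirements with the best satisfaction-per-cost ratios, which is exactly the trade-off the HOQ-plus-LP procedure is meant to expose.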
Not available.
Not available.
A mathematical model was constructed to model the life cycle costs of new products. The fundamental conceptual framework integrates unit cost, experience, the product life cycle, and learning curves over the life of the project. For the analysis of the model, six factors were investigated: saturation level K, delay factor A, inflection point I, unit cost polynomial M, learning factor L, and experience factor E. A two-level, six-factor factorial experimental design was used to help determine the overall validity of the model under realistic conditions. The results of the analysis suggested that every factor used in the model had a statistically and practically significant effect on the costs of the project. Therefore, all six factors should be considered by the analyst when making a decision to manufacture a new product.
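A compact sketch of the model's structure, under assumed parameter values: cumulative sales follow a logistic life cycle governed by the saturation level K, delay factor A, and inflection point I, while unit cost declines along a learning curve with factor L; total life cycle production cost integrates unit cost over the units built each period. (The unit cost polynomial M and experience factor E are omitted here for brevity.)

```python
import numpy as np

K, A, I = 100_000.0, 0.15, 24.0   # saturation level, delay factor, inflection (months)
C1, L = 50.0, 0.90                # first-unit cost, 90% learning curve
b = np.log(L) / np.log(2.0)       # learning exponent: cost(n) = C1 * n**b

def cumulative_sales(t):
    """Logistic life cycle: cumulative units sold by time t."""
    return K / (1.0 + np.exp(-A * (t - I)))

months = np.arange(0, 61)
cum = cumulative_sales(months)
units = np.diff(cum)                     # units produced in each month
n_mid = cum[:-1] + units / 2.0           # cumulative unit index at mid-batch
total = np.sum(C1 * np.maximum(n_mid, 1.0) ** b * units)
print(f"life cycle production cost: ${total:,.0f}")
```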
Nonparametric linear regression and fuzzy linear regression have been developed from different assumptions and conceptions. Their methodologies differ from each other, and therefore their estimates of an unknown model differ as well. In this thesis, their characteristics, such as basic assumptions, parameter estimation, and applications, are described and compared. Their performances are also evaluated through a simulation experiment to identify the conditions under which one method performs better than the other. It turns out that nonparametric regression is superior to fuzzy regression in predictive capability, whereas their descriptive performances depend on various factors. When the size of a data set is small, when error terms have small variability, or when the relationships among variables are not well specified, fuzzy linear regression outperforms nonparametric linear regression in terms of descriptive capability. The conditions under which each method can be used as a viable alternative to conventional least squares regression are also identified.
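To make the contrast concrete, the sketch below fits two estimators to the same simulated data: ordinary least squares, and a Tanaka-style fuzzy linear regression posed as a linear program that minimizes total spread subject to every observation lying inside the fuzzy band (h = 0). This is the textbook LP formulation, offered only as an assumed stand-in for the thesis's simulation design.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 15)
y = 2.0 + 0.8 * x + rng.normal(0.0, 0.4, x.size)
X = np.column_stack([np.ones_like(x), x])       # design matrix [1, x]

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # least squares coefficients

# LP variables: centers a0, a1 (free) and spreads c0, c1 (nonnegative);
# minimize total spread sum_i (c0 + c1*|x_i|) subject to
#   a.x_i + c.|x_i| >= y_i  and  a.x_i - c.|x_i| <= y_i  for every i.
obj = np.concatenate([np.zeros(2), np.abs(X).sum(axis=0)])
A_ub = np.vstack([np.hstack([-X, -np.abs(X)]),
                  np.hstack([ X, -np.abs(X)])])
b_ub = np.concatenate([-y, y])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2 + [(0, None)] * 2)
print("LS centers:   ", beta_ls)
print("fuzzy centers:", res.x[:2], "spreads:", res.x[2:])
```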