Engineering Probability And Statistics D.k Murugesan: A Comprehensive Guide




If you are an engineering student or a professional engineer, you probably know how important it is to have a solid foundation in probability and statistics. These two disciplines are essential for solving complex engineering problems, analyzing data, designing experiments, and making decisions under uncertainty. But how can you master these topics in a systematic and effective way?







One of the best resources available for learning engineering probability and statistics is the book by D.k Murugesan. He is a renowned professor of mathematics and statistics at Anna University in India. He has over 40 years of teaching and research experience in various fields of engineering and applied mathematics. He has authored several books and papers on probability, statistics, operations research, numerical methods, and fuzzy logic.


In this article, we will provide a comprehensive guide to engineering probability and statistics as covered by D.k Murugesan. We will explain what engineering probability and statistics are, why they are important, and how they are applied in different engineering domains. We will also review the book by D.k Murugesan and highlight its strengths and weaknesses. By the end of this article, you will have a clear understanding of the subject and how the book can help you enhance your engineering skills.


Basic Concepts of Engineering Probability And Statistics




Before we dive into the applications of engineering probability and statistics, let us first review some of the basic concepts that underlie these disciplines. These concepts include probability, random variables, and statistics.


Probability




Probability is a measure of how likely an event is to occur. It can be expressed as a number between 0 and 1, where 0 means impossible and 1 means certain. For example, the probability of rolling a six on a fair die is 1/6 or 0.167.


There are three axioms that define probability:



  • Axiom 1: The probability of any event is a non-negative number.



  • Axiom 2: The probability of the sample space (the set of all possible outcomes) is 1.



  • Axiom 3: The probability of the union of mutually exclusive events (events that cannot occur together) is equal to the sum of their probabilities.



Based on these axioms, we can derive various rules for calculating probabilities. Some of these rules are:



  • Rule 1: The probability of the complement of an event (the event not occurring) is equal to 1 minus the probability of the event.



  • Rule 2: The probability of the intersection of two events (both events occurring) is equal to the product of their probabilities if they are independent (the occurrence of one event does not affect the other).



  • Rule 3: The probability of the union of two events (either event or both occurring) is equal to the sum of their probabilities minus the probability of their intersection.



For example, suppose we toss a coin twice. The sample space is {HH, HT, TH, TT}, where H stands for heads and T stands for tails. The probability of each outcome is 1/4 or 0.25. The probability of getting at least one head is the probability of the union of HH, HT, and TH. Since these outcomes are mutually exclusive, this is 0.25 + 0.25 + 0.25 = 0.75; equivalently, by Rule 1, it is 1 minus the probability of TT, which is 1 - 0.25 = 0.75.
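To make this concrete, here is a minimal Python simulation (standard library only; the trial count and seed are arbitrary choices) that estimates the probability of getting at least one head in two tosses:

```python
import random

random.seed(42)  # fixed seed so repeated runs give the same estimate

trials = 100_000
at_least_one_head = 0
for _ in range(trials):
    tosses = (random.choice("HT"), random.choice("HT"))  # two fair coin tosses
    if "H" in tosses:
        at_least_one_head += 1

# The exact answer is 1 - P(TT) = 1 - 0.25 = 0.75
print(f"Estimated P(at least one head): {at_least_one_head / trials:.3f}")
```

The estimate converges to 0.75 as the number of trials grows, in agreement with the calculation above.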


Random Variables




A random variable is a variable that takes on different values depending on the outcome of a random experiment. For example, if we toss a coin twice, we can define a random variable X as the number of heads obtained. X can take on values 0, 1, or 2 depending on whether we get TT, HT or TH, or HH.


There are two types of random variables: discrete and continuous. A discrete random variable can take on only a finite or countable number of values, such as X in the previous example. A continuous random variable can take on any value in a given interval, such as Y, the height of a person.


Each random variable has a distribution function that describes how likely it is to take on a certain value. For discrete random variables, this function is called the probability mass function (PMF), and for continuous random variables, it is called the probability density function (PDF). The PMF or PDF can be represented by a table, a formula, or a graph.


For example, suppose we roll a fair die once. We can define a discrete random variable Z as the number shown on the die. Z can take on values 1, 2, 3, 4, 5, or 6 with equal probability. The PMF of Z is given by:


z | P(Z = z)
--- | ---
1 | 1/6
2 | 1/6
3 | 1/6
4 | 1/6
5 | 1/6
6 | 1/6

Plotted, the PMF consists of six bars of equal height 1/6.



Suppose we measure the weight of a randomly selected person in kilograms. We can define a continuous random variable W as the weight of the person. W can take on any value between 0 and infinity. The PDF of W is given by some function f(w), such that:



  • f(w) ≥ 0 for all w



  • The area under the curve f(w) from w = a to w = b is equal to the probability that W lies between a and b



  • The total area under the curve f(w) from w = -∞ to w = ∞ is equal to 1



For example, suppose W follows a normal distribution with mean μ = 70 and standard deviation σ = 10. The PDF of W is given by:


$$f(w) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(w-\mu)^2}{2\sigma^2}}$$ Plotted with μ = 70 and σ = 10, this PDF is the familiar bell-shaped curve centered at w = 70.
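As a quick numerical check, the following sketch (standard library only) evaluates this density for μ = 70 and σ = 10; the values in the comments are rounded:

```python
import math

def normal_pdf(w, mu=70.0, sigma=10.0):
    """Normal probability density function evaluated at w."""
    coeff = 1.0 / (math.sqrt(2 * math.pi) * sigma)
    return coeff * math.exp(-((w - mu) ** 2) / (2 * sigma ** 2))

print(normal_pdf(70))  # ~0.0399, the peak of the curve at the mean
print(normal_pdf(90))  # ~0.0054, two standard deviations above the mean
```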



Statistics




Statistics is the science of collecting, organizing, analyzing, and interpreting data. There are two main branches of statistics: descriptive and inferential.


Descriptive statistics summarizes and displays data using measures such as mean, median, mode, standard deviation, range, frequency, percentage, histogram, bar chart, pie chart, etc. For example, suppose we have a data set of five test scores: 80, 85, 90, 95, 100. Some descriptive statistics for this data set are:



  • Mean: The sum of the values divided by the number of values, which is (80 + 85 + 90 + 95 + 100) / 5 = 90 in this case


  • Median: The middle value when the data is arranged in ascending order, which is 90 in this case



  • Mode: The most frequent value in the data, which is none in this case



  • Standard deviation: A measure of how much the data varies from the mean, which is 7.91 in this case (using the sample formula; the population formula gives 7.07)



  • Range: The difference between the maximum and minimum values in the data, which is 20 in this case



  • Frequency: The number of times each value appears in the data, which is 1 for each value in this case



  • Percentage: The ratio of a part to the whole expressed as a fraction of 100, which is 20% for each value in this case



  • Histogram: A graphical representation of the frequency distribution of the data using bars of equal width (here, each score would appear as one bar of height 1)
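All of these summary measures can be computed with Python's built-in statistics module. The sketch below reproduces the values above for the five test scores; note the distinction between the sample and population standard deviation formulas:

```python
import statistics

scores = [80, 85, 90, 95, 100]

print("Mean:", statistics.mean(scores))                       # 90
print("Median:", statistics.median(scores))                   # 90
print("Sample SD:", round(statistics.stdev(scores), 2))       # 7.91 (divides by n - 1)
print("Population SD:", round(statistics.pstdev(scores), 2))  # 7.07 (divides by n)
print("Range:", max(scores) - min(scores))                    # 20
```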




Inferential statistics draws conclusions and makes predictions based on data using methods such as hypothesis testing, confidence intervals, regression, correlation, ANOVA, etc. For example, suppose we want to test whether the mean test score of a class is different from 80. We can use a hypothesis test to do so. A hypothesis test involves four steps:



  • State the null and alternative hypotheses. The null hypothesis is the statement that we assume to be true unless there is strong evidence against it. The alternative hypothesis is the statement that we want to test. In this case, the null hypothesis is H0: μ = 80 and the alternative hypothesis is Ha: μ ≠ 80.



  • Choose a significance level. The significance level is the probability of rejecting the null hypothesis when it is true. It is usually denoted by α and chosen to be a small value such as 0.05 or 0.01. In this case, let us choose α = 0.05.



  • Calculate the test statistic and p-value. The test statistic is a measure of how far the sample mean is from the hypothesized mean under the null hypothesis. The p-value is the probability of obtaining a test statistic as extreme or more extreme than the observed one under the null hypothesis. In this case, we can use a t-test to calculate the test statistic and p-value. The formula for the t-test statistic is:



$$t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}$$ where $\bar{x}$ is the sample mean, $\mu_0$ is the hypothesized mean, $s$ is the sample standard deviation, and $n$ is the sample size. Plugging in the values from our data set, we get:


$$t = \frac{90 - 80}{7.91 / \sqrt{5}} = 2.83$$ The p-value can be obtained from a table or a calculator. In this case, using a calculator, we get p ≈ 0.047.



  • Make a decision and interpret the results. We compare the p-value with the significance level and decide whether to reject or fail to reject the null hypothesis. If p ≤ α, we reject H0 and accept Ha. If p > α, we fail to reject H0 and do not accept Ha. In this case, since p ≈ 0.047 < α = 0.05, we reject H0 and accept Ha. This means that we have enough evidence to conclude that the mean test score of the class is different from 80.
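The whole test can be reproduced in a few lines with SciPy (assuming it is installed); scipy.stats.ttest_1samp performs exactly this two-sided one-sample t-test:

```python
from scipy import stats

scores = [80, 85, 90, 95, 100]
t_stat, p_value = stats.ttest_1samp(scores, popmean=80)  # H0: mu = 80

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # t = 2.83, p = 0.047
if p_value <= 0.05:
    print("Reject H0: the mean score differs from 80.")
else:
    print("Fail to reject H0.")
```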



Applications of Engineering Probability And Statistics




Now that we have reviewed some of the basic concepts of engineering probability and statistics, let us see how they are applied in different engineering domains. Some of these domains are reliability engineering, quality control, and design of experiments.


Reliability Engineering




Reliability engineering is the branch of engineering that deals with the design, analysis, and improvement of systems and components that perform their intended functions without failure for a specified period of time under specified conditions. Reliability engineering uses probability and statistics to model and quantify failure phenomena such as wear-out, fatigue, corrosion, creep, etc., and to estimate reliability measures such as failure rate, mean time to failure (MTTF), mean time between failures (MTBF), availability, etc.


For example, suppose we have a system that consists of three components connected in series.



The system works only if all three components work. The failure rates of the components are λ1 = 0.01, λ2 = 0.02, and λ3 = 0.03 per hour, respectively. The failure rate of the system is equal to the sum of the failure rates of the components, which is λ = 0.06 per hour. The MTTF of the system is equal to the reciprocal of the failure rate, which is MTTF = 1 / λ = 16.67 hours.
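A short Python sketch of this calculation, assuming exponential lifetimes (constant failure rates), which is the standard model behind these formulas:

```python
import math

# Failure rates (per hour) of the three series components
failure_rates = [0.01, 0.02, 0.03]

lam = sum(failure_rates)  # series-system failure rate: 0.06 per hour
mttf = 1 / lam            # mean time to failure: ~16.67 hours

def system_reliability(t):
    """Probability that the series system survives past time t."""
    return math.exp(-lam * t)

print(f"System failure rate: {lam:.2f} per hour")
print(f"MTTF: {mttf:.2f} hours")
print(f"Reliability at t = 10 hours: {system_reliability(10):.3f}")  # exp(-0.6) ~ 0.549
```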


Quality Control




Quality control is the branch of engineering that deals with the monitoring and improvement of the quality of products and processes using statistical methods. Quality control uses probability and statistics to measure and control variation, detect defects, reduce waste, and ensure customer satisfaction. Some of the tools and techniques used in quality control are control charts, acceptance sampling, Pareto analysis, cause-and-effect diagrams, etc.


For example, suppose we have a process that produces light bulbs. We want to monitor the variation in the lifetime of the light bulbs using a control chart. A control chart is a graphical tool that plots the sample statistics (such as mean or range) of a quality characteristic (such as lifetime) over time and compares them with predetermined control limits. If the sample statistics fall within the control limits, the process is said to be in control. If they fall outside the control limits, the process is said to be out of control and needs investigation.


Suppose we take samples of size n = 5 every hour and measure their lifetimes in hours. The mean lifetime of each sample is denoted by $\bar{x}$. The mean and standard deviation of the sample means are denoted by $\bar{\bar{x}}$ and $s$, respectively. The control limits for $\bar{x}$ are given by:


$$\bar{\bar{x}} \pm 3\,\frac{s}{\sqrt{n}}$$ Suppose we have collected 20 samples and calculated their means and ranges. The table below shows the data:


Sample | $\bar{x}$ | R
--- | --- | ---
1 | 1000 | 50
2 | 1010 | 40
3 | 1020 | 60
4 | 990 | 30
5 | 980 | 20
6 | 970 | 10
7 | 960 | 40
8 | 950 | 50
9 | 940 | 60
10 | 930 | 70
11 | 920 | 80
12 | 910 | 90
13 | 900 | 100
14 | 890 | 110
15 | 880 | 120
16 | 870 | 130
17 | 860 | 140
18 | 850 | 150
19 | 840 | 160
20 | 830 | 170

The mean and standard deviation of $\bar{x}$ are calculated as:


$$\bar{\bar{x}} = \frac{1}{20} \sum_{i=1}^{20} \bar{x}_i = 925$$ $$s = \sqrt{\frac{1}{19} \sum_{i=1}^{20} (\bar{x}_i - \bar{\bar{x}})^2} = 59.16$$ The control limits for $\bar{x}$ are calculated as:


$$\bar{\bar{x}} \pm 3\,\frac{s}{\sqrt{n}} = 925 \pm 3\,\frac{59.16}{\sqrt{5}} = (845.63, 1004.37)$$ Plotting the 20 sample means against these limits gives the $\bar{x}$ control chart.



From the control chart, we can see that samples 2 and 3 fall above the upper control limit, while samples 19 and 20 fall below the lower control limit. In addition, the sample means show a steady downward trend after sample 3, which is itself an out-of-control signal. The process is therefore out of control. This could be due to a special cause of variation, such as a change in the raw material, the machine, or the operator. We need to investigate and eliminate the cause of this variation and bring the process back into control.
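The limits and out-of-control checks can be reproduced with the sketch below. It follows the text's simple approach of estimating variability from the 20 sample means themselves; textbook $\bar{x}$-charts more often estimate it from the average within-sample range, so treat this as illustrative:

```python
import statistics

# Sample means from the 20 hourly samples in the table above
xbars = [1000, 1010, 1020, 990, 980, 970, 960, 950, 940, 930,
         920, 910, 900, 890, 880, 870, 860, 850, 840, 830]
n = 5  # observations per sample

grand_mean = statistics.mean(xbars)  # 925
s = statistics.stdev(xbars)          # 59.16
margin = 3 * s / n ** 0.5            # ~79.37
lcl, ucl = grand_mean - margin, grand_mean + margin

print(f"Control limits: ({lcl:.2f}, {ucl:.2f})")  # (845.63, 1004.37)
for i, xbar in enumerate(xbars, start=1):
    if not lcl <= xbar <= ucl:
        print(f"Sample {i} (mean {xbar}) is out of control")
```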


Design of Experiments




Design of experiments is the branch of engineering that deals with the planning, conducting, analyzing, and interpreting of experiments using statistical methods. Design of experiments uses probability and statistics to optimize the experimental conditions, minimize the experimental errors, maximize the information obtained, and draw valid and reliable conclusions. Some of the tools and techniques used in design of experiments are factorial design, response surface methodology, Taguchi method, etc.


For example, suppose we want to study the effect of three factors (temperature, pressure, and catalyst) on the yield of a chemical reaction. We can use a factorial design to do so. A factorial design is a type of experimental design that involves varying all the factors at two or more levels and observing their effects and interactions on the response variable. A factorial design can be represented by a notation such as 2^3 or 3^2, where the exponent indicates the number of factors and the base indicates the number of levels.


In this case, we can use a 2^3 factorial design, where each factor has two levels: low (-) and high (+). The table below shows the design matrix and the corresponding yield values:


Temperature | Pressure | Catalyst | Yield
--- | --- | --- | ---
- | - | - | 50
- | - | + | 55
- | + | - | 60
- | + | + | 65
+ | - | - | 70
+ | - | + | 75
+ | + | - | 80
+ | + | + | 85

From these data we can construct the main effects and interaction plots for the design.



From the main effects plot, we can see that all three factors have a positive effect on the yield, meaning that increasing their levels increases the yield. The magnitude of the effect can be measured by the slope of the line. The steeper the line, the larger the effect. In this case, temperature has the largest effect, followed by pressure and catalyst.


From the interactions plot, we can see that there are no significant interactions between any pair of factors, meaning that their effects are independent of each other. The absence of interaction is indicated by parallel lines; if there were interactions, we would see crossing or non-parallel lines.
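The main effects quoted above can be verified directly from the design matrix. The following sketch computes each factor's effect as the mean yield at its high level minus the mean yield at its low level:

```python
# Runs of the 2^3 factorial: (temperature, pressure, catalyst) coded as -1/+1, then yield
runs = [
    (-1, -1, -1, 50), (-1, -1, +1, 55), (-1, +1, -1, 60), (-1, +1, +1, 65),
    (+1, -1, -1, 70), (+1, -1, +1, 75), (+1, +1, -1, 80), (+1, +1, +1, 85),
]

def main_effect(factor):
    """Difference between mean yields at the factor's high and low levels."""
    high = [y for *levels, y in runs if levels[factor] == +1]
    low = [y for *levels, y in runs if levels[factor] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

for name, idx in [("Temperature", 0), ("Pressure", 1), ("Catalyst", 2)]:
    print(f"{name} effect: {main_effect(idx)}")  # 20.0, 10.0, 5.0
```

For this data set the yields are perfectly additive, which is why the interaction plots show parallel lines.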


Engineering Probability And Statistics D.k Murugesan Book Review




Now that we have seen some of the applications of engineering probability and statistics, let us review the book by D.k Murugesan that covers these topics in detail. The book is titled "Engineering Probability And Statistics" and was published by S. Chand Publishing in 2010. The book has 14 chapters and 624 pages. The book is intended for undergraduate and postgraduate students of engineering and applied mathematics, as well as for practicing engineers and researchers.


Overview




The book provides a comprehensive and rigorous treatment of engineering probability and statistics, with an emphasis on concepts, methods, and applications. The book covers both discrete and continuous probability distributions, random variables, statistics, reliability engineering, quality control, design of experiments, and more. The book is organized as follows:



  • Chapter 1: Introduction to Probability And Statistics: This chapter introduces the basic concepts and terminology of probability and statistics, such as sample space, events, probability axioms and rules, random variables, distribution functions, etc.



  • Chapter 2: Discrete Probability Distributions: This chapter discusses some of the common discrete probability distributions, such as binomial, Poisson, geometric, negative binomial, hypergeometric, etc., and their properties and applications.



  • Chapter 3: Continuous Probability Distributions: This chapter discusses some of the common continuous probability distributions, such as uniform, exponential, normal, gamma, beta, etc., and their properties and applications.



  • Chapter 4: Functions of Random Variables: This chapter explains how to find the distribution function of a function of one or more random variables, such as linear transformation, sum, product, quotient, etc., using methods such as transformation technique, convolution theorem, moment generating function, etc.



  • Chapter 5: Joint Probability Distributions: This chapter discusses the joint, marginal, and conditional distributions of two or more random variables, along with related concepts such as covariance, correlation, the multinomial distribution, etc., and their properties and applications.


  • Chapter 6: Descriptive Statistics: This chapter reviews some of the descriptive statistics that are used to summarize and display data, such as mean, median, mode, standard deviation, range, frequency, percentage, histogram, bar chart, pie chart, etc.



  • Chapter 7: Sampling Distributions And Estimation: This chapter explains how to use sample data to make inferences about population parameters, such as mean, variance, proportion, etc., using methods such as point estimation, interval estimation, maximum likelihood estimation, method of moments, etc.



  • Chapter 8: Hypothesis Testing: This chapter explains how to use sample data to test hypotheses about population parameters, such as mean, variance, proportion, etc., using methods such as z-test, t-test, F-test, chi-square test, ANOVA test, etc.



  • Chapter 9: Correlation And Regression Analysis: This chapter explains how to measure and model the relationship between two or more variables using methods such as correlation coefficient, simple linear regression, multiple linear regression, polynomial regression, etc.



  • Chapter 10: Reliability Engineering: This chapter explains how to design, analyze, and improve systems and components that perform their intended functions without failure for a specified period of time under specified conditions using methods such as reliability models, failure analysis, reliability testing, reliability improvement, etc.



  • Chapter 11: Quality Control: This chapter explains how to monitor and improve the quality of products and processes using statistical methods such as control charts, acceptance sampling, Pareto analysis, cause-and-effect diagrams, etc.



One of the main strengths of the book is that it is rich in examples and exercises. The book provides numerous examples and exercises that illustrate and reinforce the concepts and methods, drawn from a wide range of engineering applications.

