CIOReview | June 2019

IN MY OPINION

UNDERSTANDING BIAS-VARIANCE TRADE-OFFS IN MACHINE LEARNING

By Manuj Desai, A Technology Leader & Author of Clinch the Deal: Negotiation Strategies To Get What You Want

Bias-variance trade-offs are essential to understand when analyzing the data that will be used in machine learning algorithms. In a previous article, I explained the importance of choosing the right parameters, and the right number of parameters. If a machine learning model is too simple and has very few parameters, it may have high bias and low variance. On the other hand, if the model has a large number of parameters, it is going to have high variance and low bias. So we need to find the right balance, what we also call a "good balance," that neither over-fits nor under-fits the data. The complexity of the problem statement drives the trade-off between bias and variance; an algorithm cannot be more complex and less complex at the same time.

Psychology helps us understand machine learning. At the end of the day, we are humans who have built these machines, and even if machines only understand 'zero' and 'one', we humans need simple examples or stories to understand a theory. Putting the psychology hat on, bias is defined as "inclination or prejudice for or against one person or group, especially in a way considered to be unfair." Unless we understand bias, true balance cannot be achieved. It is easier said than done.

Every time I talk or think of bias, the 'Okra' story comes to my mind. What's Okra? It's a vegetable consumed in many parts of the world; however, it's treated differently depending