AI Engineering Degree Practice Exam 2025 - Free AI Engineering Practice Questions and Study Guide

Question: 1 / 400

What is a potential drawback of k-means clustering?

It can only cluster binary data.

It is sensitive to cluster initialization. (Correct answer)

It can handle high-dimensional datasets easily.

It does not require feature scaling.

K-means clustering is indeed sensitive to cluster initialization, and this is a significant drawback. The algorithm begins by selecting initial centroids, often at random, and depending on those starting points it may converge to a local minimum of its objective rather than the global minimum, producing suboptimal clusters. In practice, different runs of K-means can yield different cluster assignments unless the initial centroids are chosen carefully or the algorithm is rerun several times with different initializations and the best result is kept.

In practical applications, this means the choice of initial points deserves attention. Techniques such as K-means++ mitigate the issue with a smarter initialization strategy: each new centroid is sampled with probability proportional to its squared distance from the nearest centroid already chosen, which spreads the starting points apart and makes good clusterings more likely and more consistent.
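As a rough illustration that is not part of the original question, the sketch below contrasts single random initializations with scikit-learn's built-in k-means++ initialization plus restarts; the synthetic dataset, parameter values, and variable names are assumptions made for the demo.

# Illustrative sketch: sensitivity to initialization, random vs. k-means++.
# Assumes scikit-learn is installed; data and parameters are invented for the demo.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 5 well-separated clusters (hypothetical example).
X, _ = make_blobs(n_samples=1000, centers=5, cluster_std=1.0, random_state=0)

# A single random initialization can land in a poor local minimum,
# so the final inertia (within-cluster sum of squares) varies by seed.
for seed in range(3):
    km_random = KMeans(n_clusters=5, init="random", n_init=1, random_state=seed)
    km_random.fit(X)
    print(f"random init, seed={seed}: inertia={km_random.inertia_:.1f}")

# k-means++ spreads the initial centroids apart; combined with several
# restarts (n_init), it usually reaches a better and more stable solution.
km_pp = KMeans(n_clusters=5, init="k-means++", n_init=10, random_state=0)
km_pp.fit(X)
print(f"k-means++ with 10 restarts: inertia={km_pp.inertia_:.1f}")

Whether the random runs actually get stuck depends on the data and the seeds, but repeated restarts and k-means++ initialization are the standard safeguards.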

The other options describe characteristics that do not hold for K-means. The algorithm works on continuous numerical data rather than only binary data, and it does not handle high-dimensional datasets easily, since distance-based methods degrade under the curse of dimensionality. Finally, while K-means will run without feature scaling, applying normalization or standardization usually leads to more meaningful clusters when features are on different scales, because the Euclidean distance would otherwise be dominated by the largest-scale features.
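To make the scaling point concrete, here is a small hypothetical sketch in which one feature has a much larger scale than the other; standardizing before clustering keeps that feature from dominating the distance. The column meanings and values are invented for illustration.

# Illustrative sketch: feature scaling before K-means (assumed setup).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features: income has a large scale but no real cluster
# structure; score is small-scale but forms two clear groups.
income = rng.normal(50_000, 10_000, 1000)
score = np.concatenate([rng.normal(0.2, 0.05, 500),
                        rng.normal(0.8, 0.05, 500)])
X = np.column_stack([income, score])
true_groups = np.repeat([0, 1], 500)

# Without scaling, Euclidean distance is dominated by the income column.
labels_raw = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Standardize each feature to zero mean and unit variance first.
X_scaled = StandardScaler().fit_transform(X)
labels_scaled = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)

print("ARI without scaling:", adjusted_rand_score(true_groups, labels_raw))
print("ARI with scaling:   ", adjusted_rand_score(true_groups, labels_scaled))

With scaling, the clustering recovers the two score groups far more reliably; without it, the split mostly reflects noise in the large-scale income feature.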


