Aptitude and Tech interview questions (Sunday 08)

Please try your best

  1. A machine can produce 30 identical products in 6 hours. How long will it take the machine to produce 40 identical products at the same rate?
    a. 8 hours b. 9 hours c. 10 hours d. 12 hours

  2. In a class of 45 students, 30 are girls. What percentage of the class is made up of boys?
    a. 30% b. 33% c. 40% d. 45%

  3. A car traveled 120 miles in 2 hours. If the car maintains the same speed, how far will it travel in 5 hours?
    a. 200 miles b. 300 miles c. 400 miles d. 500 miles

  4. Explain EDA in detail and also explain the steps involved.

  5. Explain Abstraction with an example.

6 Likes

@kaushal-ta-ds
1-Ans- a. 8 hours
2-Ans- b. 33%
3-Ans- b. 300 miles
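
Working: 30 products in 6 hours is 5 products per hour, so 40 products take 40 / 5 = 8 hours; boys = 45 - 30 = 15, and 15/45 is about 33%; speed = 120 / 2 = 60 mph, so in 5 hours the car covers 60 × 5 = 300 miles.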

3 Likes

@rrnayak2609
Correct :100:
Keep it up :heart:
And try to answer the remaining two questions as well

2 Likes

@kaushal-ta-ds

  1. a. 8 hours
  2. b. 33%
  3. b. 300 miles
1 Like
  1. a) 8 hours
  2. b) 33%
  3. b) 300 miles
  4. I can't understand this one; what is EDA?
  5. I don't know about Abstraction.
1 Like
  1. 8 hours
  2. 33%
  3. 300 miles
  4. EDA is used to summarize the main characteristics of data sets, often with data visualization methods.
  5. Abstraction is a technique used to hide the details that are irrelevant to the user and to show only what the user actually needs. Example: ATM operations, as sketched below.
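
A minimal Python sketch of that ATM idea (the class and method names here are only illustrative, not from any real banking library): the user only sees the abstract operations deposit and withdraw, while the balance handling stays hidden in the concrete class.

```python
from abc import ABC, abstractmethod

class Account(ABC):
    """Abstract view of an account: the ATM user only sees these operations."""

    @abstractmethod
    def deposit(self, amount: float) -> None: ...

    @abstractmethod
    def withdraw(self, amount: float) -> None: ...

class SavingsAccount(Account):
    """Concrete implementation; how the balance is maintained stays hidden."""

    def __init__(self, balance: float = 0.0):
        self._balance = balance  # internal detail, never shown on the "ATM screen"

    def deposit(self, amount: float) -> None:
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("Insufficient funds")
        self._balance -= amount

# The user interacts only with the abstract operations, like pressing ATM buttons.
acct: Account = SavingsAccount(100.0)
acct.withdraw(40.0)
acct.deposit(25.0)
```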
2 Likes

@shivatejawork
@sanchayitanandi17
@avishekchat7797
Correct :100:

And for the last two questions, please try to explore more, as these are interview questions. Try to frame your answers in a better way so that they are easy to deliver in interviews.

2 Likes
  1. [a]
  2. [b]
  3. [c]
  4. [Exploratory Data Analysis, or EDA, is an important step in any Data Analysis or Data Science project. EDA is the process of investigating the dataset to discover patterns and anomalies (outliers), and to form hypotheses based on our understanding of the dataset; see the short pandas sketch after this list.]
  5. [Abstraction is the process of generalising complex events in the real world to the concepts that underlie them, tucking away the complexities of the situation.]
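
As a rough illustration of that investigation step, here is a minimal pandas sketch; the file name sales.csv and the revenue column are hypothetical placeholders, not part of any specific dataset.

```python
import pandas as pd

# Hypothetical dataset; replace with your own file and columns.
df = pd.read_csv("sales.csv")

print(df.shape)         # how many rows/columns we are working with
print(df.describe())    # summary statistics to spot odd ranges
print(df.isna().sum())  # missing values per column

# Simple outlier check on a numeric column using the 1.5*IQR rule.
q1, q3 = df["revenue"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["revenue"] < q1 - 1.5 * iqr) | (df["revenue"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} potential outliers in 'revenue'")
```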
3 Likes

@sms18680
Awesome :+1:
Keep it up

1 Like

1 a. 8 hours

2 b. 33%

3 b. 300 miles

4 Exploratory Data Analysis (EDA) is one of the techniques used for extracting the vital features and trends that machine learning and deep learning models rely on in Data Science. EDA has therefore become an important milestone for anyone working in data science, and it is applied regularly across many fields to support and promote business activities.

Steps Involved in Exploratory Data Analysis (EDA)

The key components of an EDA are the main steps undertaken to perform it. These are as follows:

1. Data Collection

Nowadays, data is generated in huge volumes and in various forms across every sector of human life, such as healthcare, sports, manufacturing, and tourism. Every business knows the importance of using data beneficially by analyzing it properly. However, this depends on collecting the required data from various sources, such as surveys, social media, and customer reviews, to name a few. Without collecting sufficient and relevant data, further activities cannot begin.
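
A small pandas sketch of this collection step, assuming two hypothetical source files (customer_reviews.csv and survey_results.xlsx); in practice the sources could equally be survey exports, social-media dumps, or API pulls.

```python
import pandas as pd

# Hypothetical source files standing in for surveys, reviews, etc.
reviews = pd.read_csv("customer_reviews.csv")
survey = pd.read_excel("survey_results.xlsx")

# Combine everything into one working dataset for the analysis that follows.
data = pd.concat([reviews, survey], ignore_index=True)
print(f"Collected {len(data)} records from 2 sources")
```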

2. Finding all Variables and Understanding Them

When the analysis process starts, the first focus is on the available data, which carries a lot of information. This information contains varying values for different features or characteristics, which helps in understanding them and extracting valuable insights. This step requires first identifying the important variables that affect the outcome and their possible impact, and it is crucial for the final result expected from any analysis.
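
A short pandas sketch of this step, assuming the hypothetical customer_reviews.csv dataset and a hypothetical churn target column.

```python
import pandas as pd

df = pd.read_csv("customer_reviews.csv")  # hypothetical dataset from the previous step

df.info()                           # column names, dtypes, and non-null counts
print(df.nunique())                 # how many distinct values each variable takes
print(df.describe(include="all"))   # quick feel for ranges and categories

# Shortlist the variables most likely to affect the outcome
# ('churn' is a hypothetical target column).
print(df.groupby("churn").mean(numeric_only=True))
```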

3. Cleaning the Dataset

The next step is to clean the data set, which may contain null values and irrelevant information. These are to be removed so that the data contains only those values that are relevant and important from the target point of view. This not only saves time but also reduces the computational power required for estimation. Preprocessing takes care of issues such as null values, outliers, and anomaly detection.
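
A hedged pandas sketch of typical cleaning operations; the column names (ticket_id, age, churn, monthly_spend) are hypothetical examples, not a prescription.

```python
import pandas as pd

df = pd.read_csv("customer_reviews.csv")  # hypothetical dataset

df = df.drop_duplicates()             # remove repeated records
df = df.drop(columns=["ticket_id"])   # drop a column irrelevant to the target (example)

# Handle null values: fill numeric gaps with the median, drop rows missing the target.
df["age"] = df["age"].fillna(df["age"].median())
df = df.dropna(subset=["churn"])

# Clip extreme outliers in a numeric column to the 1st/99th percentiles.
low, high = df["monthly_spend"].quantile([0.01, 0.99])
df["monthly_spend"] = df["monthly_spend"].clip(low, high)
```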

4. Identify Correlated Variables

Finding the correlation between variables helps us understand how a particular variable is related to another. The correlation matrix gives a clear picture of how the different variables correlate, which further helps in understanding the vital relationships among them.
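
One way to do this in Python, using pandas for the correlation matrix and seaborn for a heatmap; the dataset and column names are the same hypothetical ones used in the earlier sketches.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("customer_reviews.csv")  # hypothetical dataset

# Correlation matrix over the numeric variables only.
corr = df.corr(numeric_only=True)
print(corr)

# A heatmap makes the strongly related pairs easy to spot.
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation matrix")
plt.show()
```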

5. Choosing the Right Statistical Methods

Depending on whether the data is categorical or numerical, its size, the type of variables, and the purpose of the analysis, different statistical tools are employed. Statistical formulae applied to numerical outputs give fair information, but graphical visuals are more appealing and easier to interpret.
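
A small sketch of how that choice plays out in Python: a frequency table for a categorical variable, summary statistics for a numerical one, and a Welch t-test from SciPy as one example inferential method (column names are hypothetical).

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("customer_reviews.csv")  # hypothetical dataset

# Categorical variable: a frequency table is usually the right summary.
print(df["region"].value_counts(normalize=True))

# Numerical variable: central tendency and spread.
print(df["monthly_spend"].agg(["mean", "median", "std"]))

# Example inferential test: do the two churn groups differ in spend?
churned = df.loc[df["churn"] == 1, "monthly_spend"]
stayed = df.loc[df["churn"] == 0, "monthly_spend"]
print(stats.ttest_ind(churned, stayed, equal_var=False))
```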

6. Visualizing and Analyzing Results

Once the analysis is over, the findings should be observed cautiously and carefully so that a proper interpretation can be made. The trends in the spread of the data and the correlations between variables give good insights for making suitable changes in the data parameters. The data analyst should be capable of analyzing the results and well-versed in all the analysis techniques. The results obtained are specific to the data of that particular domain, whether retail, healthcare, or agriculture.
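
A short matplotlib/pandas sketch of typical EDA visuals, again with hypothetical column names: a histogram for distribution, a boxplot for group spread, and a scatter plot for a pairwise relationship.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("customer_reviews.csv")  # hypothetical dataset

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# Distribution of a numeric variable.
df["monthly_spend"].plot.hist(bins=30, ax=axes[0], title="Spend distribution")

# Spread by category, useful for spotting group-level differences.
df.boxplot(column="monthly_spend", by="region", ax=axes[1])

# Relationship between two numeric variables.
df.plot.scatter(x="age", y="monthly_spend", ax=axes[2], title="Age vs spend")

plt.tight_layout()
plt.show()
```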

1 Like
  1. a. 8 hours
  2. b. 33%
  3. b. 300 miles
1 Like

@mashettyomkar123
Correct :100: