Advanced AI systems like ChatGPT are being studied for their potential to improve student learning outcomes. Can ChatGPT act as a reliable intelligent tutor and assist engineering students in their learning journeys? Do these large language models exhibit the reasoning skills needed to serve as tutors?
To examine ChatGPT's knowledge and reasoning capacity, we had it take a basic engineering mechanics test and compared its performance to that of a group of engineering students. The test comprised 20 questions.
Consider a beam with an overhang. Does the applied load cause a bending moment at support A? If yes, what is the magnitude of the moment?
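The question gives no dimensions or load values, but the calculation it asks for, the moment of an applied force about a support point, can be sketched with assumed numbers. The load magnitude and distance below are hypothetical, chosen only for illustration:

```python
# Hypothetical values: the original question states no numbers, so the
# load and geometry here are assumptions for illustration only.
P = 10.0  # downward point load on the overhang, kN
d = 5.0   # horizontal distance from support A to the load, m

# The moment of a force about a point equals the force times the
# perpendicular distance to that point. Taking counterclockwise as
# positive, a downward load to the right of A produces a clockwise
# (negative) moment about A.
M_A = -P * d

print(M_A)  # -50.0, i.e. a 50 kN*m clockwise moment about A
```

So yes, in this sketch the applied load does produce a moment about support A, with magnitude P times its lever arm.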
For the beam given in the previous question, what is the direction of the bending moment caused by the vertical reaction at the roller support about point A?
Consider this simply supported beam and its free body diagram. Given equations 1 through 4, select the equations that represent the correct equilibrium conditions for the beam.
A beam subjected to a vertical load rests on two roller supports. Calculate the beam support reactions.
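Since the question provides no numbers, here is a minimal sketch of the reaction calculation under assumed values: a 6 m span between the two supports and a single 12 kN point load 2 m from the left support. The procedure, summing moments about one support and then summing vertical forces, is standard statics; only the specific numbers are assumptions:

```python
# Assumed configuration (the question gives no numbers): supports A and
# B are 6 m apart, with a 12 kN downward point load 2 m from A.
L_span = 6.0  # distance between supports, m
P = 12.0      # downward point load, kN
a = 2.0       # load position measured from support A, m

# Sum of moments about A = 0:  R_B * L_span - P * a = 0
R_B = P * a / L_span
# Sum of vertical forces = 0:  R_A + R_B - P = 0
R_A = P - R_B

print(R_A, R_B)  # 8.0 4.0 (kN)
```

As a sanity check, the two reactions add up to the applied load, and the support nearer the load carries the larger share.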
What is the main difference between a beam and a truss? How do the two structures differ in the way they carry their loads?
To quantify the performance of ChatGPT and the students on this test, we assigned points to each response. Here is the distribution of the students' grades.
Based on this grading system, a student who scores 80% or higher is deemed to have a good understanding of the subject and should be capable of assisting other students with their questions. ChatGPT scored 70% on this test.
Further research and testing are necessary to determine whether ChatGPT's reasoning skills can be improved for tutoring purposes.