Assessing Google Classroom's Effectiveness in Communication Skills
Abstract
This study delves into the effectiveness of Google Classroom in enhancing academic
performance among first-year engineering students in a communication skills course. Rooted in
the Technological Pedagogical Content Knowledge (TPACK) model and the Diffusion of
Innovations theory, the research tests two hypotheses concerning the effects of technology use
and instructor involvement on student outcomes. With a sample of 356 students, the analysis
employs t-tests and regression analysis to compare performance between students using Google
Classroom and traditional teaching methods. The results reveal no significant difference in
performance between the two groups, suggesting that integrating Google Classroom does not
inherently enhance academic outcomes. However, marginal effects of instructor involvement
were observed, underscoring the intricate interplay of human and technological factors in
educational settings. The implications of these findings are significant, as they provide a nuanced
understanding of the role of technology in education and the importance of effective instructor
engagement. Future research should examine technology's role across diverse disciplines and the
dynamics of instructor-student interactions within technology-enhanced environments.
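
The group comparison and the instructor analysis described above can be reproduced with standard statistical tooling. Below is a minimal sketch in Python, assuming a hypothetical dataset with a score column (course performance), a binary google_classroom indicator, and an instructor factor; the column names and the simulated data are illustrative assumptions, not the study's actual variables or results.

# Illustrative sketch of the analysis described in the abstract:
# an independent-samples t-test comparing Google Classroom vs. traditional
# instruction, followed by a regression adding an instructor factor.
# Data, column names, and group assignment are hypothetical, not the study's.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 356  # sample size reported in the abstract

df = pd.DataFrame({
    "score": rng.normal(70, 10, n),                # course performance (0-100 scale)
    "google_classroom": rng.integers(0, 2, n),     # 1 = Google Classroom, 0 = traditional
    "instructor": rng.choice(["A", "B", "C"], n),  # instructor assignment
})

# Independent-samples t-test: does performance differ between the two groups?
gc = df.loc[df["google_classroom"] == 1, "score"]
trad = df.loc[df["google_classroom"] == 0, "score"]
t_stat, p_value = stats.ttest_ind(gc, trad, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Regression: performance on technology use plus instructor effects,
# mirroring the hypothesis that the instructor factor moderates outcomes.
model = smf.ols("score ~ google_classroom + C(instructor)", data=df).fit()
print(model.summary())

With simulated data of this kind, a non-significant coefficient on google_classroom alongside small instructor effects would correspond to the pattern reported in the abstract.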
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.