Offshore wind-photovoltaic hybrid systems
Abstract
This paper presents a comprehensive analysis of offshore wind-photovoltaic hybrid systems and the autonomous components tailored for maritime applications. Focusing on the critical role of these systems in supplying electricity to regions with limited infrastructure, particularly maritime zones, the study examines their economic advantages over conventional power-line installations in remote areas. Key components are discussed, including battery storage and autonomy, battery selection (with a focus on lead-acid batteries), charge controllers, inverters, and the integration of the Automatic Identification System (AIS) for maritime safety. The article concludes by underscoring the importance of power supply buoys for ship safety and efficient refueling near hybrid systems.
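As an illustrative aside (not drawn from the paper itself), the relationship between battery storage and autonomy mentioned above can be sketched with the standard off-grid sizing relation: the battery bank must hold the daily load multiplied by the desired days of autonomy, inflated by depth-of-discharge and efficiency limits. All numerical values and names below are hypothetical examples.

```python
# Illustrative battery-bank sizing sketch for an off-grid hybrid system.
# Values and function names are hypothetical examples, not figures from the paper.

def battery_capacity_ah(daily_load_wh: float,
                        autonomy_days: float,
                        depth_of_discharge: float,
                        system_voltage: float,
                        battery_efficiency: float = 0.85) -> float:
    """Return the nominal battery-bank capacity in ampere-hours.

    Stored energy = daily load * days of autonomy, divided by the
    allowable depth of discharge and the round-trip efficiency.
    """
    required_wh = daily_load_wh * autonomy_days / (depth_of_discharge * battery_efficiency)
    return required_wh / system_voltage


if __name__ == "__main__":
    # Example: a navigation buoy drawing 600 Wh/day, 3 days of autonomy,
    # a lead-acid bank limited to 50% depth of discharge, 24 V system.
    capacity = battery_capacity_ah(600, 3, 0.5, 24)
    print(f"Required battery capacity: {capacity:.0f} Ah")
```

For lead-acid banks, the conservative 50% depth-of-discharge assumption used in the example is what typically drives the bank size up relative to other chemistries.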