
Search results
Search in the keywords: machine learning
Results found: 26
EN
Spamming is the act of abusing an electronic messaging system by sending unsolicited bulk messages. Filtering these messages is merely another line of defence and does not prevent spam from circulating in email systems. This problem causes users to distrust email systems, to suspect even legitimate emails, and leads to substantial investment in technologies to counter spam. Spammers exploit the lack of accountability and verification features of communicating entities. To contribute to the fight against spamming, a cloud-based system has been designed that analyses email server logs and uses predictive analytics with machine learning to build trust identities modelling the email messaging behaviour of spamming and legitimate servers. The system constructs trust models for servers and updates them regularly to keep the models tuned. This study proposes that this approach will not only minimize the circulation of spam in email messaging systems, but will also be a novel step towards trust identities and accountability in email infrastructure.
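As an illustration of the approach the abstract describes, here is a minimal sketch of a server trust model: a classifier fitted to aggregate per-server log features, whose legitimate-class probability serves as the trust score. The feature set, the training rows and the choice of logistic regression are assumptions made for the sketch, not the paper's implementation.

```python
# Minimal sketch of a server "trust identity" model (assumed features
# and data; the abstract does not specify the paper's actual model).
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per sending server, aggregated from its logs:
# [messages/hour, bounce rate, distinct-recipient ratio, auth pass rate]
X_train = np.array([
    [500.0, 0.40, 0.95, 0.10],  # bulk sender, many bounces -> spamming
    [  2.0, 0.01, 0.30, 0.99],  # low volume, authenticated -> legitimate
    [300.0, 0.35, 0.90, 0.05],
    [  5.0, 0.02, 0.25, 0.97],
])
y_train = np.array([1, 0, 1, 0])  # 1 = spamming server, 0 = legitimate

model = LogisticRegression().fit(X_train, y_train)

# Trust score = predicted probability of legitimate behaviour;
# re-fitting on fresh log windows plays the role of the paper's
# regular model updates.
new_server = np.array([[120.0, 0.20, 0.80, 0.50]])
trust = model.predict_proba(new_server)[0, 0]
print(f"trust score: {trust:.2f}")
```

A production system would replace the toy rows with features aggregated from real server logs over rolling time windows.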
Open Physics | 2011 | vol. 9 | issue 3 | 874-883
EN
The aim of the present work is to use one of the machine learning techniques, genetic programming (GP), to model p-p interactions through discovered functions. In our study, GP is used to simulate and predict the multiplicity distribution of charged pions $P(n_{ch})$, the average multiplicity $\langle n_{ch} \rangle$ and the total cross section $\sigma_{tot}$ at different high energies. We have obtained the multiplicity distribution as a function of the centre-of-mass energy $\sqrt{s}$ and the number of charged particles $n_{ch}$, while both the average multiplicity and the total cross section are obtained as functions of $\sqrt{s}$ alone. The functions discovered by the GP technique show a good match to the experimental data. The performance of the GP models was also tested on data not used in training and was found to be in good agreement with the experimental data.
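To make the technique concrete, below is a self-contained genetic-programming loop that evolves expression trees to fit sample points. The target curve (a logarithmic parametrisation of an average-multiplicity-like quantity) and all data points are synthetic assumptions for the sketch, not the paper's fits or measurements.

```python
# Toy genetic programming: evolve an expression f(s) matching sample
# points of an assumed target; not the paper's parametrisation or data.
import math, random

random.seed(0)
DATA = [(s, 0.9 + 0.6 * math.log(s) + 0.13 * math.log(s) ** 2)
        for s in (10.0, 50.0, 200.0, 900.0, 7000.0)]

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    # Terminal: the variable s, its logarithm, or a random constant.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['s', 'log_s', round(random.uniform(-2, 2), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, s):
    if tree == 's':
        return s
    if tree == 'log_s':
        return math.log(s)
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, s), evaluate(right, s))

def fitness(tree):  # mean squared error over the sample points
    return sum((evaluate(tree, s) - y) ** 2 for s, y in DATA) / len(DATA)

def mutate(tree):   # replace a random subtree with a fresh one
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, l, r = tree
    return (op, mutate(l), r) if random.random() < 0.5 else (op, l, mutate(r))

pop = [random_tree() for _ in range(200)]
for gen in range(40):
    pop.sort(key=fitness)                 # selection: keep the fittest
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
best = min(pop, key=fitness)
print('best MSE:', fitness(best), 'tree:', best)
```

A full GP system would add subtree crossover and protected operators; mutation-only evolution keeps the sketch short while preserving the core idea of searching a space of functions.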
EN
Introduction: This review aims to present briefly the new horizons that deep learning (DL) technology opens to pharmacology, and to underline the most important threats and limitations of the method.
Material and Methods: Following preferred reporting items, we searched multiple databases for articles related to deep learning and drug research published before May 2021. Of the 267 articles retrieved, we included 50 in the final review.
Results: DL and other types of artificial intelligence have recently entered all spheres of science, taking an increasingly central position in decision-making processes, including in pharmacology. Hence, there is a need for a better understanding of these technologies. The basic differences between AI (artificial intelligence), ML (machine learning) and DL are explained. Additionally, the authors highlight the role of deep learning methods in drug research and development, as well as in improving the safety of pharmacotherapy. Finally, future directions of DL in pharmacology are outlined, together with its possible misuses.
Conclusions: DL is a promising and powerful tool for the comprehensive analysis of big data across all fields of pharmacology; however, it has to be used carefully.
EN
INTRODUCTION: ChatGPT is a language model created by OpenAI that can engage in human-like conversations and generate text based on the input it receives. The aim of the study was to assess the overall performance of ChatGPT on the Polish Medical Final Examination (Lekarski Egzamin Końcowy – LEK) and the factors influencing the percentage of correct answers. Secondly, the chatbot's capability to provide correct and insightful explanations was examined.
MATERIAL AND METHODS: We entered 591 questions with distractors from the LEK database into ChatGPT (13 February – 14 March version). We compared the results with the answer key and analyzed the provided explanations for logical justification. For correct answers we analyzed the logical consistency of the explanation, while for incorrect answers we observed the ability to provide a correction. Selected factors were analyzed for their influence on the chatbot's performance.
RESULTS: ChatGPT achieved impressive scores of 58.16%, 60.91% and 67.86% on the three most recent examinations, allowing it to pass the official threshold of 56% in all instances. More than 70% of the correctly answered questions were backed by a logically coherent explanation. For the wrongly answered questions, the chatbot provided a seemingly correct explanation for false information in 66% of cases. Factors such as logical construction (p < 0.05) and difficulty index (p < 0.05) influenced the overall score, while the question's character count (p = 0.46) and language (p = 0.14) did not.
CONCLUSIONS: Although ChatGPT achieved a score sufficient to pass the LEK, in many cases it provided misleading information backed by a seemingly compelling explanation. The chatbot can be especially misleading for non-medical users compared with a web search, because it gives instant, convincing explanations; used improperly, it could pose a danger to public health. For the same reasons, ChatGPT should be used cautiously as a study aid.
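The grading procedure described in Material and Methods can be automated along these lines. This is a hedged sketch using the OpenAI Python SDK (the study itself used the ChatGPT web interface), with an invented sample question standing in for the LEK database.

```python
# Sketch of the study's grading loop, automated via the OpenAI SDK.
# The question data here are invented, not taken from the LEK database.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Each entry: question stem, lettered distractors, and the answer key.
questions = [
    {"stem": "Which drug is first-line for anaphylaxis?",
     "options": {"A": "Epinephrine", "B": "Atropine",
                 "C": "Amiodarone", "D": "Aspirin", "E": "Heparin"},
     "key": "A"},
]

correct = 0
for q in questions:
    prompt = q["stem"] + "\n" + "\n".join(
        f"{letter}. {text}" for letter, text in q["options"].items()
    ) + "\nAnswer with a single letter, then explain your reasoning."
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    answer = reply.strip()[0].upper()  # naive parse of the leading letter
    correct += answer == q["key"]
    # The explanation following the letter is what the authors then
    # inspected by hand for logical consistency.

print(f"score: {correct}/{len(questions)}")
```

The factor analysis (question length, language, difficulty index) would then be run over the per-question results collected by such a loop.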