Information technologies have influenced every sphere of human activity and have a potentially transformative role in education and training, particularly in distance education, turning it into an innovative form of learning. The need for new technologies in the teaching-learning process grows ever stronger and faster. The information age offers sound and unmatched opportunities for discovery, exchange of information, communication and inquiry that strengthen the teaching-learning process. Information technologies promote knowledge sharing throughout the world and help teachers and students keep their information and knowledge up to date. Accurate and correct information is vital for effective teaching and learning, and information technologies are "a set of tools that can help provide the right people with the right information at the right time." Students are free to make the best possible decisions about their studies, learning time, place and resources. They can work in collaborative and interactive learning environments, communicating effectively, sharing information, and exchanging ideas and learning experiences with everyone in that environment.
Topicality and demand of the dissertation theme. In world informatization practice, electronic document interchange systems (EDIS) have in recent years been considered and introduced not only as systems for automating management processes but also as full-fledged platforms for creating a unified information space, so the scope of their use naturally expands and interest in them among researchers worldwide grows. Research by leading scientists in infocommunication technologies highlights the demand for solving the problems of authentic gathering, transfer, analysis and coding of information during the formation of office documents, with the aim of obtaining effective technologies that raise the mobility and productivity of EDIS.
The complex measures undertaken by the Government of the Republic of Uzbekistan on developing regional-territorial automated management systems and creating a unified information space are directed at the wide introduction of information systems, EDIS and databases (DB) on the basis of modern information and communication technologies. In this connection, working out new methods for the intelligent processing of information resources used to improve data-transfer quality, allowing errors in the structure of electronic document interchange to be detected and corrected effectively with the least material and time expense, is especially urgent and at the same time remains an unresolved theoretical and applied problem of important economic value.
Requirements on information resources and data-transmission streams, as an important factor of the efficiency and quality of EDIS functioning, are expressed in providing stability, integrity, safety and authenticity of the information. One of the most important factors is the criterion of information authenticity, threatened by distortion of transferred messages in infocommunication networks because of failures and malfunctions of equipment, interference in communication channels, and errors of operators and of scanning and recognition systems.
Hence, the construction of effective systems for controlling information authenticity during transfer and processing is of special scientific interest as a priority data-processing technology, characteristic of automated management and electronic document interchange at enterprises and organizations.
Existing methods, despite providing a high level of information-transfer authenticity, leave some questions unsolved, chief among them the following. In the development of electronic document interchange technologies, a significant share of each modern data-transfer package is spent on headers, while most of the header information remains constant from package to package throughout the transfer of a frame; the resulting redundancy, together with delivery-reliability mechanisms that consist basically of sending acknowledgement messages and retransmitting packages, leads to additional time and material expenses during error detection and elimination. Moreover, code and hardware methods of information-transfer control are focused mainly on eliminating transposition errors in the control fields of packages, whereas data transfer also produces distortions in the information fields, which frequently appear as multiple text errors.
In this connection, it is reasonable to consider the tasks of providing information-transfer authenticity in two aspects.
Solutions to tasks of the first type should take into account the errors of the human operator and of scanners and other devices intended for information input. These kinds of mistakes make up the greatest volume of distortions in text and arise at the Application and Presentation layers of the OSI model (Open System Interconnection reference model).
Solutions to tasks of the second type, devoted to the control of information reliability, take into consideration the probability of distortions occurring at the Transport, Network, Data Link and Physical layers of the OSI model.
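As a hedged illustration of the code-based transfer-control methods discussed above (not the dissertation's own algorithms), a cyclic redundancy check appended to a packet's information field lets the receiver detect many channel distortions; the CRC-8 polynomial used here is an illustrative assumption:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over a byte string (polynomial x^8 + x^2 + x + 1)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift left; XOR in the polynomial when the top bit falls out.
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

packet = b"electronic document"
check = crc8(packet)            # sender appends this control value

# Receiver recomputes the CRC; a mismatch signals a distorted packet
# and triggers the retransmission mechanism described above.
corrupted = b"electronic dacument"
assert crc8(packet) == check
assert crc8(corrupted) != check
```

The check value costs one extra byte per packet, which is exactly the kind of artificial redundancy whose overhead the dissertation seeks to reduce by exploiting redundancy already present in the data.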
The demand for the dissertation is characterized by the fact that the introduction of a wide range of IP-enabled technologies into electronic document interchange is connected with tasks, requiring close attention, of detecting and correcting errors during the preparation and processing of documents.
This research work is focused on supporting the realization of the laws of the Republic of Uzbekistan «On informatization», «On electronic digital signature», «On electronic document», «On electronic commerce», «On electronic payments», and the Decree of the Cabinet of Ministers of the Republic of Uzbekistan № 126 of 05.04.2011 «About measures on installation and use of a single secured e-mail and system of electronic document interchange in the executive office of the Cabinet of Ministers, bodies of government and economic governance, and local government».
It follows that the solution of the listed tasks requires special research and development connected with the creation of methods and algorithms capable of controlling information authenticity in the structure of data-transfer packages through the use of embedded redundancy, and able to function in the transport environment while eliminating existing shortcomings. This fact necessitates singling out a special class of algorithms for providing information authenticity on the basis of a new type of redundancy, PR-redundancy (property redundancy), determined by the properties of the processed object.
The purpose of the research is the development of constructive methods, models, algorithms and systems for controlling information authenticity during data transfer and processing, on the basis of mechanisms that use PR-redundancy of various natures, and the software and algorithmic realization of the results for developing electronic document interchange technologies.
The scientific novelty of the dissertational research consists in the following:
a concept, methodology, and software and algorithmic bases for constructing methods, models and algorithms for information authenticity control in electronic document interchange systems are developed; classes of objects characterized by PR-redundancy, applied to provide accuracy, integrity, efficiency, compression and availability of information resources in EDIS, are identified;
methods and algorithms are offered for controlling information reliability through the use of artificial redundancy, on the basis of linear, modular and plane summing mechanisms and tests of membership in coded subsets;
methods and software complexes are developed for controlling information reliability through the use of natural redundancy, on the basis of algorithms that realize procedures of statistical, arithmetic and parsing coding, n-gram structured description, statistical pattern recognition, and hashing of text elements;
for the control and correction of spelling mistakes in Uzbek-language texts, methods and algorithms are offered on the basis of models of multilevel morphological analysis and n-gram grammatical description;
on the basis of embedded logic criteria, and of the database and knowledge base in the structure of built-in expert systems, methods and algorithms are developed for controlling information authenticity through the use of structural-technological PR-redundancy;
methods are offered for synthesizing algorithms of text-information reliability control in an interactive system of error detection and correction for developing electronic document interchange technologies.
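The "modular summing" and "linear summing" control mechanisms named above can be sketched as follows; this is a hedged illustration of the general idea, not the dissertation's concrete algorithms, and the moduli chosen are assumptions:

```python
def modular_checksum(text: str, modulus: int = 256) -> int:
    """Plain sum of character codes modulo a base -- a 'modular summing'
    control value attached to a text field."""
    return sum(ord(c) for c in text) % modulus

def linear_checksum(text: str, modulus: int = 257) -> int:
    """Position-weighted ('linear') sum; unlike the plain sum it also
    reacts to transpositions of adjacent characters."""
    return sum((i + 1) * ord(c) for i, c in enumerate(text)) % modulus

record = "INVOICE-2024"          # hypothetical document field
stored = modular_checksum(record)

# A single-character distortion changes the plain control value:
assert modular_checksum("INVOICE-2034") != stored
# But a plain sum misses transpositions, which the weighted sum catches:
assert modular_checksum("AB") == modular_checksum("BA")
assert linear_checksum("AB") != linear_checksum("BA")
```

The contrast between the two checks is why the list above distinguishes linear from modular summing: each mechanism covers a different class of distortions.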
Conclusion
1. The constructive methods, models, and software and algorithmic complexes developed in the dissertation for controlling information authenticity by the principle of using PR-redundancy, on the basis of concepts of system analysis, control and information processing, allow the efficiency and productivity of EDIS to be increased.
2. The assessment of the current state of the theory and practice of code, hardware and program methods for controlling the authenticity of information transfer has shown that existing types of redundancy are insufficient for providing qualitative functioning of EDIS. The principles of using the PR-redundancy of electronic documents for working out applied methods and software and algorithmic complexes for providing information-transfer authenticity have made it possible to design a toolkit for developing existing technologies.
3. Methods of linear, plane and modular summation; coding by the rules of Huffman, Lempel-Ziv-Welch and Burrows-Wheeler; arithmetic coding; statistical recognition; and logic control form the methodical basis for using PR-redundancy to expand the possibilities of algorithms and software complexes for providing information authenticity during the drawing up, transfer and processing of electronic document texts.
4. The efficiency of the developed algorithms is shown and solutions are obtained for tasks of controlling information authenticity on the basis of the criterion of the probability of undetected errors. It is established that they find up to 92 % of all kinds of errors and are capable of correcting single, double and adjacent transposition errors; in comparison with existing methods they reduce the labour content and cost of control by 2-3 times when the error probability is taken as P ≈ 4·10⁻³, and also raise information authenticity by up to three orders of magnitude.
5. For solving tasks of controlling and correcting spelling mistakes in Uzbek texts, methods, algorithms and systems are developed that include morphological and n-gram structured models. The developed technique for obtaining frequency characteristics of n-grams on the basis of the statistics of distortion-probability parameters is applied during the systematization of hash codes for parsing coding.
6. Interpolation and extrapolation methods for constructing the logic and arithmetic functions of statistical recognition are used to work out algorithms for controlling the authenticity of text-element images. Methods are developed for controlling the authenticity of signal characteristics of text-element images in a neural-network information-processing system that includes parts for automatic recognition and control of image signals. Methods and algorithms are realized in the structure of the information-authenticity control system for cases when information in EDIS is represented as metatext, on the basis of membership attributes and classification of the metatext on a fuzzy semantic hypernet.
7. Methods and algorithms for controlling information authenticity, based on dictionary, statistical and hash coding, make effective use of the NVIDIA hardware-software environment for parallel computation, employing standard libraries of numerical analysis and optimized data exchange between CPU and GPU.
8. It is determined that when the spelling-control system is realized on the basis of the developed ways of describing and identifying the software shell, a tree-like representation of n-gram grammar, and the architecture of the Sfinks-4 framework oriented to various language models using PR-redundancy, the number of undetected errors and the cost of realization decrease considerably, and the labour content decreases twofold in comparison with a spelling-control system based on morphological analysis.
9. The developed simulation algorithms, software complexes and systems for controlling information authenticity on the basis of PR-redundancy have found practical application in systems for the automated organization of the educational environment in higher schools; for adaptive data transfer, processing and analysis in infocommunication networks; and in the EDIS of enterprises.
10. The developed software complexes for controlling information authenticity in the structure of EDIS, and the computer system for adaptive transfer, handling and analysis of data, are implemented in real working conditions in the Samarkand branch of «UzTelecom» of the State Committee for Communication, Informatization and Telecommunication Technologies of the Republic of Uzbekistan and in the Joint Venture «Tasty-Fuit». The corresponding certificates confirm the economic efficiency of the dissertation results.
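The n-gram spell-checking idea running through conclusions 5 and 8 can be sketched in miniature: character bigrams absent from a reference corpus mark likely spelling-error positions. This is a hedged toy, with a hypothetical six-word corpus standing in for the real Uzbek dictionary and frequency statistics:

```python
from collections import Counter

def char_bigrams(word):
    """All adjacent character pairs of a word."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Toy corpus standing in for a real dictionary (hypothetical data).
corpus = ["kitob", "kitoblar", "maktab", "maktablar", "talaba", "talabalar"]
known = Counter(bg for w in corpus for bg in char_bigrams(w))

def suspicious_bigrams(word):
    """Bigrams never seen in the corpus -- candidate error positions."""
    return [bg for bg in char_bigrams(word) if bg not in known]

assert suspicious_bigrams("kitob") == []          # dictionary word: clean
assert "tq" in suspicious_bigrams("kitqb")        # typo flagged at 'tq'
```

A production system would use frequency thresholds over trigrams rather than strict absence, and would combine this signal with morphological analysis, as the conclusions describe.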
This paper illustrates the importance of data preprocessing and of techniques in this field such as data cleaning, dimensionality reduction, smoothing and normalization. During the research we discuss some details of the techniques above; however, our research covers only the theoretical aspects of data preprocessing. The data preprocessing phase, while arduous and time-intensive, stands as the cornerstone of data science and is of paramount significance. Neglecting the meticulous cleansing and structuring of data has the potential to undermine the integrity and efficacy of subsequent modeling endeavors.
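Of the techniques listed above, normalization is the simplest to show concretely. A minimal sketch of min-max rescaling (one common normalization scheme among several the paper could mean):

```python
def min_max_normalize(values):
    """Rescale a numeric feature to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # A constant feature carries no information; map it to zeros.
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

assert min_max_normalize([10, 20, 30]) == [0.0, 0.5, 1.0]
```

Rescaling keeps features with large raw ranges from dominating distance-based models, which is one reason preprocessing affects the integrity of later modeling, as the paragraph argues.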
The GDPR reforms existing data protection policy by imposing more stringent obligations not only on data controllers but also on data processors, relating to obtaining valid consent, ensuring the transparency of automated decision-making and the security of data processing, and by providing new rights for data subjects. Data subjects are entitled to withdraw their consent and to request that their data be transferred to another data controller or deleted. The GDPR also includes principles aimed at regulating cross-border transfers of EU citizens' personal data to ensure a high level of protection outside the EU.
The article discusses the experience of foreign countries in the use of digital technologies in combating corruption. It was determined that in a number of foreign countries, alongside already established technologies (e-government, information and crowdsourcing platforms), modern information technologies are being actively introduced, such as: technologies for processing large amounts of data (Big Data), distributed ledger technologies (DLT), blockchain, data mining, intellectual analysis in the field of anti-corruption in public procurement, analytical tools for auditors (forensic tools), electronic systems for verifying declarations of income, expenses, assets and interests of civil servants, and electronic anti-corruption technologies in the electoral process.
It has been determined that the benefits of digitalization can only be realized with the appropriate infrastructures, regulations, financial resources and personnel trained in ICT.
It has been substantiated that the digitalization of law enforcement activities contributes to an increase in the effectiveness and objectivity of anti-corruption policy, reduces the cost of maintaining law and order, and minimizes the influence of the human factor in this area.
It is noted that technologies based on neural networks and decentralized, synchronized databases will fundamentally change the nature of public administration and can significantly reduce the risks of corruption offenses in the future.
Censored data, where the exact value of an observation is not fully observed, poses a challenge in statistical modelling. Traditional regression approaches often fail to adequately handle such data, leading to biased estimates and inaccurate predictions. In this study, we propose a novel anti-regression framework specifically designed for censored data modelling. The framework integrates advanced statistical techniques and incorporates mechanisms to mitigate the impact of censoring. By leveraging the information available from censored observations, our approach provides more reliable estimates and improved predictive performance compared to traditional regression methods. We validate the effectiveness of our framework through extensive simulations and real-world case studies. The results demonstrate the superiority of the proposed anti-regression framework in accurately modelling censored data, highlighting its potential for various applications in fields such as medical research, finance, and engineering. This study contributes to the advancement of statistical modelling techniques for censored data and provides a valuable tool for researchers and practitioners dealing with such data in their analyses.
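Since the abstract above does not spell out its anti-regression framework, here is only a hedged sketch of the classical baseline it improves upon: the Kaplan-Meier estimator, a standard way to use the partial information in right-censored observations instead of discarding them. The data are hypothetical:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates for right-censored data.
    times: observed times; events: 1 = event observed, 0 = censored."""
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    for t in sorted({tt for tt, e in data if e == 1}):
        at_risk = sum(1 for tt, _ in data if tt >= t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        # Each event time multiplies survival by the conditional
        # probability of surviving past t among those still at risk.
        surv *= 1 - deaths / at_risk
        curve.append((t, surv))
    return curve

# Times 5 and 8 end in an observed event; time 6 is censored
# (the true value is only known to exceed 6).
est = kaplan_meier([5, 6, 8], [1, 0, 1])
```

The censored subject at time 6 still contributes to the at-risk count at time 5 but never registers as an event, which is exactly the "information available from censored observations" the abstract refers to.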
This research examines the onomatopoeia, or onomatopoeic expressions, and the meanings of the onomatopoeias found in the children's literature "Paddington" by Michael Bond. The data analysis was grouped into four classifications: the types of onomatopoeia based on Bredin (1996), classification by the sound described, onomatopoeia meaning based on the dictionary, and contextual meaning. This study was descriptive qualitative research. The results showed that 103 onomatopoeic words could be found in the book. From the 103 expressions, 66 onomatopoeia expressions were similar, corresponding to 38 distinct onomatopoeic expressions identified by the researcher. Based on the types, 23 of the data belong to direct onomatopoeia, 9 to associative onomatopoeia and 6 to exemplary onomatopoeia. Based on the classification by the sound described, the researcher found 17 human-sound data, 21 other-sound data, 2 animal-sound data, and 1 instrument-sound data. Most of the data were direct onomatopoeia because most of the onomatopoeic expressions in "Paddington" are words that represent the sound of an action.
This article presents a systematic literature review of studies that have used data mining and learning analytics techniques to predict student performance. The review covers a period of 10 years (2011-2021) and examines a total of 50 papers from various sources. The results show that data mining and learning analytics techniques have been widely used to predict student performance in different educational contexts, including K-12, higher education, and online learning. The most commonly used data mining and learning analytics techniques were decision trees, logistic regression, neural networks, and support vector machines. The review identifies the main challenges and limitations of using data mining and learning analytics techniques for predicting student performance, including issues related to data quality, feature selection, model validation, and ethical considerations. The article concludes with recommendations for future research in this area.
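Logistic regression, one of the techniques the review finds most common, can be sketched in plain Python; this is a hedged toy with hypothetical features (hours studied, attendance rate) and a pass/fail label, not a model from any reviewed paper:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Logistic regression fitted by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # predicted pass probability
            err = p - yi                     # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical records: (hours studied, attendance rate) -> passed?
X = [(1, 0.5), (2, 0.6), (8, 0.9), (9, 0.95)]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)

def predict(x):
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

assert predict((9, 0.9)) > 0.5 and predict((1, 0.4)) < 0.5
```

The review's concerns about data quality and feature selection apply directly here: with so few features and records, a real study would need validation on held-out students before trusting such a model.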
In today's digital age, the proliferation of cyber threats poses significant risks to organizations' systems and data. Cyber threat intelligence (CTI) has emerged as a vital component of modern cybersecurity strategies, enabling organizations to proactively defend against potential threats. This article explores how organizations gather and analyze information about cyber threats to protect their systems and data.
By leveraging diverse data sources such as open-source intelligence, dark web monitoring, and information sharing, organizations enhance their understanding of threat actors' tactics and motivations. Advanced analysis techniques, including data mining, machine learning, and human expertise, enable organizations to identify emerging threats and prioritize their response effectively.
The implementation of CTI has led to improved threat detection, enhanced situational awareness, and proactive defense measures. However, challenges such as data overload and the need for skilled analysts must be addressed to maximize the effectiveness of CTI. Ultimately, CTI plays a crucial role in fortifying organizations' resilience against the evolving cyber threat landscape.
The article examines the impact of technological innovations on data processing in international commercial arbitration. The focus is on two aspects: the opportunities that technology provides to improve the efficiency and quality of arbitration procedures, and the threats associated with cybersecurity and data privacy. The authors analyze how modern technologies can facilitate the collection, analysis and exchange of information in the course of proceedings, but also highlight the risks associated with the storage and transmission of confidential information in digital form. The article calls for greater regulation and enforcement of privacy and data protection laws in the context of international arbitration.
In an era where data security is paramount, this study introduces a groundbreaking approach to fortify information integrity through advanced techniques in censored data modeling and anti-regression innovation. We delve into the intricacies of safeguarding sensitive insights, pushing the boundaries of conventional methodologies. The framework presented in this research not only enhances predictive accuracy but also ensures robust protection against potential threats, thus redefining the landscape of data security.
Subjects of the research: the diagnostics of data transmission systems' (DTS) elements under operating conditions, methods of control of DTA (data transmission apparatus), and the diagnostics of digital devices on the basis of signature analysis.
Purpose of work: the investigation and development of effective methods for the control and diagnostics of data transmission systems' elements.
Method of research: in solving the given problems, analytical and program methods of investigation were used, including elaborated models and methodics with subsequent processing and analysis of the received results. The analytical methods were based on probability theory, flow-chart theory, reliability theory, the algebra of logic, and machine modeling theory.
The results obtained and their novelty: a cascade model of the error source of a discrete channel, and a strategy for diagnosing and restoring the efficiency of DTS elements; a mathematical model of embedded control over DTS elements with and without self-control, with an evaluation of its volume and efficiency; a methodic for evaluating and calculating the reliability of sample signatures; algorithms for fault detection in the application of signature analysis, minimizing search time; and an imitation model for evaluating methods of compact testing and sample-signature shaping.
Practical value: the elaborated methodics, algorithms and programs are recommended for practical use in designing control and diagnostic provision at the operation stage of data transmission systems.
Degree of implementation and economic effectiveness: the results of the dissertation work have been adopted at AK «Uzbektelecom». The theoretical and practical results have been used at TUIT in the «Telecommunications» specialty and the 5A522205 «Communication networks and control systems» specialty.
Field of application: the proposed methodics, algorithms and programs can be widely used in the operation of data transmission systems and in the development of control and diagnostic support for digital systems and devices of telecommunication equipment.
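Signature analysis, the diagnostic technique central to the work above, compresses a device's long test-response stream into a short signature in a linear feedback shift register (LFSR) and compares it with a reference. A hedged software sketch follows; the 16-bit width and tap positions are illustrative assumptions, not the dissertation's concrete polynomial:

```python
def lfsr_signature(bits, taps=(16, 14, 13, 11), width=16):
    """Fold a test-response bit stream into a 16-bit signature using a
    software model of an LFSR signature register."""
    reg = 0
    for bit in bits:
        fb = bit
        for t in taps:
            fb ^= (reg >> (t - 1)) & 1   # XOR the tapped register bits
        reg = ((reg << 1) | fb) & ((1 << width) - 1)
    return reg

good = [1, 0, 1, 1, 0, 0, 1, 0] * 4      # fault-free device response
faulty = good[:]
faulty[5] ^= 1                           # single stuck-at distortion
assert lfsr_signature(good) != lfsr_signature(faulty)
```

Because the register update is linear and invertible, any single-bit error in the response stream is guaranteed to change the signature; the "reliability of sample signatures" evaluated in the dissertation concerns the residual probability that a multi-bit error aliases to the reference signature.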
In world practice, it is important to develop targeted strategies aimed at further improving the business environment, implementing active business projects to ensure sustainable economic development, pursuing economic policies that have a positive effect on business development, conducting research aimed at using blockchain technology, and ensuring the interest of state and society in business development. The issues of doing business after the global crisis caused by the pandemic, statistical observation and forecasting, statistical assessment of the impact of the pandemic on business entities, and the improvement of methods of statistical analysis of business development processes during and after the pandemic have become important as never before. As a result of the conducted research, the authors developed an approach to assessing the quantitative and qualitative indicators of factors affecting business development processes; created software that performs accurate analyses of the data on the basis of blockchain; developed a comprehensive analytical approach based on statistical indicators characterizing business development trends in the country, in the SNA sectors, and in the context of key industries and regions; built multifactor empirical models; and proposed forecast options for 2021-2026 using the scenario method. The theoretical approaches and initial data used are taken from official sources: data on business entities operating in Uzbekistan, proposals and recommendations implemented in practice, statistical data of the State Committee on Statistics of the Republic of Uzbekistan, and primary data obtained during the study. The proposed blockchain stages in the business process accurately evaluate results that meet the requirements of the digital economy.
The remarkable growth of accessible data sources has enormously impacted access to usable health information. As a consequence, medically biased information has become difficult to use for decision-making. In this paper, we consider these consequences and present an improved technique for accessing health information in real time. The approach uses Vapnik's Support Vector Machine for text classification. The proposed technique was implemented on PHP/MySQL for web clients. The experimental setup shows that the method outperforms the baseline on the Precision, Recall and F1 measures. An extension using the Gaussian kernel is suggested in the paper.
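The abstract above does not give its SVM configuration, so this is only a hedged stand-in: a linear SVM trained with Pegasos-style sub-gradient descent on toy bag-of-words vectors (the features, labels and hyperparameters are all hypothetical):

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style training of a linear SVM (hinge loss, L2 penalty)."""
    random.seed(0)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            t += 1
            eta = 1 / (lam * t)              # decreasing step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]   # L2 shrinkage
            if margin < 1:                   # hinge-loss sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

# Toy bag-of-words counts: [count("dosage"), count("weather")];
# +1 = health-related text, -1 = unrelated text (hypothetical data).
X = [[3, 0], [2, 1], [0, 3], [1, 2]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)

def score(x):
    return sum(wj * xj for wj, xj in zip(w, x))

assert all(yi * score(xi) > 0 for xi, yi in zip(X, y))
```

The Gaussian-kernel extension the paper suggests would replace the inner products above with kernel evaluations, letting the classifier separate classes that are not linearly separable in the raw feature space.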
An intrusion detection and prevention system (IDS, IDPS) is one of the solutions deployed against malicious network attacks. Implementing IDS and IDPS systems is difficult because attackers constantly change the tools and methods they use. This article presents the challenges and benefits of using data mining technologies to mitigate network attacks. Methods and tools have been developed for building an intrusion detection system based on data mining for operational analysis and effective response, making it possible to overcome some known shortcomings of signature-search and anomaly-detection systems.
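To make the anomaly-detection side of the discussion concrete, here is a minimal statistical sketch (not the article's system): traffic samples far outside the baseline distribution are flagged, with the threshold and packets-per-second figures as assumptions:

```python
import statistics

def anomaly_flags(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean -- a minimal z-score anomaly detector."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [abs(x - mu) / sigma > threshold for x in observed]

# Hypothetical packets-per-second samples from normal traffic:
normal = [100, 104, 98, 101, 99, 103, 97, 102]
flags = anomaly_flags(normal, [100, 450, 99])
assert flags == [False, True, False]   # the 450 pps burst stands out
```

Such a detector illustrates both strengths and shortcomings the article alludes to: it catches novel attacks that signature search misses, but a slow attacker who stays near the baseline evades it, which is where richer data-mining models come in.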
This study investigated the effect of procedural justice on the psychological well-being of the teachers of the Archdiocese of Bamenda, Northwest Region of Cameroon. The study employed a sequential explanatory research design. The sample comprised 270 teachers working in Catholic nursery, primary and secondary schools within the Mankon, Bayelle and Bambui Deaneries of the Archdiocese of Bamenda. Quantitative data were collected with a questionnaire, while Focus Group Discussions were conducted to generate qualitative data. Quantitative data were analyzed using the linear regression technique; qualitative data generated from the Focus Group Discussions were analyzed using thematic analysis. Findings from the quantitative analysis revealed that 63% of teachers indicated satisfaction with procedural justice within the Bamenda Archdiocesan Catholic Education Agency and that procedural justice had a significant effect on the psychological well-being of teachers of the Archdiocese of Bamenda. Based on these findings, the study recommended that the government should effectively follow up, monitor and audit the channels for paying subvention funds to beneficiary private-sector teachers to ensure that the money reaches the intended beneficiaries uncompromised.
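The linear regression technique used in the quantitative analysis above reduces, in its simplest one-predictor form, to ordinary least squares. A hedged sketch with hypothetical scores (a procedural-justice rating predicting a well-being rating), not the study's actual data:

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical ratings: procedural-justice score -> well-being score.
slope, intercept = ols_fit([1, 2, 3, 4], [2.1, 3.9, 6.1, 7.9])
assert abs(slope - 1.96) < 1e-9 and abs(intercept - 0.1) < 1e-9
```

A positive, significantly non-zero slope is what the study's finding of a "significant effect" of procedural justice on well-being corresponds to in regression terms.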