McCleary posted an update 7 months, 2 weeks ago
The internal consistency reliability study was then conducted with 34 software engineering researchers, and all constructs of the content-validated model were found to be reliable. The final version of the model consists of 6 constructs and 44 items: (1) Configuration (eight items), (2) Composition (four items), (3) Extension (six items), (4) Integration (eight items), (5) Modification (five items), and (6) SaaS quality (13 items). The resulting constructs and items may enhance the capability to empirically analyze the impact of software customization on SaaS quality.

Mutation testing is a method widely used to evaluate the effectiveness of a test suite in hardware and software testing, or to design new software tests. In mutation testing, the original model is systematically mutated under certain error assumptions, using well-defined mutation operators that imitate typical programming errors or that yield highly effective test suites. The success of a test suite is measured by the rate at which it kills the mutants created by these operators. Because mutation testing produces a large number of mutants, the computational cost of testing finite state machines (FSMs) is high. Under the assumption that all mutants are of equal value, random selection can be a practical mutant reduction method. In this study, however, it was assumed that mutants are not of equal value. Starting from this point, a new mutant reduction method was proposed using centrality criteria from social network analysis, under the assumption that the central regions selected in this way are the regions through which test cases pass most often. To evaluate the proposed method, the widely used W method was chosen, as it detects all failures related to the model.
The random and proposed mutant reduction methods were then compared with respect to their success using these test suites. The evaluations showed that mutants selected via the proposed reduction technique performed better, and that the proposed method reduced the cost of mutation testing.
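The centrality-based reduction described above can be sketched as follows. This is a minimal toy illustration, not the study's actual method: the FSM, the transition-target mutation operator, the use of plain degree centrality, and the exhaustive length-3 test suite are all assumptions made for the sake of a runnable example.

```python
from itertools import product

# Toy FSM over inputs "0"/"1"; state names and transitions are illustrative.
fsm = {
    "A": {"0": "B", "1": "A"},
    "B": {"0": "C", "1": "A"},
    "C": {"0": "A", "1": "A"},
}

def mutants(machine):
    """Transition-target mutants: redirect one transition at a time."""
    for state, trans in machine.items():
        for symbol, target in trans.items():
            for other in machine:
                if other != target:
                    m = {s: dict(t) for s, t in machine.items()}
                    m[state][symbol] = other
                    yield (state, symbol, other), m

def degree_centrality(machine):
    """In-degree plus out-degree of each state in the transition graph."""
    deg = {s: len(t) for s, t in machine.items()}   # out-degree
    for trans in machine.values():
        for target in trans.values():
            deg[target] += 1                        # in-degree
    return deg

def run(machine, word, start="A"):
    state = start
    for ch in word:
        state = machine[state][ch]
    return state

def killed(machine, mutant, tests):
    """A mutant is killed if any test distinguishes it from the original."""
    return any(run(machine, t) != run(mutant, t) for t in tests)

# Reduction sketch: keep only mutants whose mutated transition starts in the
# most central state -- the region test cases pass through most often.
deg = degree_centrality(fsm)
central = sorted(deg, key=deg.get, reverse=True)[:1]
selected = [(k, m) for k, m in mutants(fsm) if k[0] in central]

tests = ["".join(p) for p in product("01", repeat=3)]   # exhaustive length-3 suite
score = sum(killed(fsm, m, tests) for _, m in selected) / len(selected)
```

Here only 4 of the 12 possible mutants survive the reduction, yet the toy suite still kills all of them; the study's point is that such centrality-guided selections tend to retain the more informative mutants.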
In the modern world, millions of people suffer from fake and poor-quality medical products entering the market. Violations of drug transportation rules make drugs ineffective and even dangerous. The relationships between the various parts of the supply chain, and between drug production and regulation, are highly complex and problematic. Distributed ledger technology is a distributed database whose properties allow the entire path of medical products to be tracked from manufacturer to consumer, improving the current supply chain model, transforming the pharmaceutical industry, and preventing falsified drugs from reaching the market.
The aim of the article is to analyze distributed ledger technology as an innovative means of preventing poor-quality pharmaceuticals from reaching the market, as well as of detecting them early.
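The tamper-evidence property the article relies on can be illustrated with a minimal hash-linked ledger. This is a single-process sketch, not a real distributed ledger network; the batch identifier, actors, and event fields are hypothetical.

```python
import hashlib
import json

class ProvenanceLedger:
    """Append-only record chain: each record stores the hash of the previous
    one, so altering any shipment event breaks the chain and is detectable."""

    def __init__(self):
        self.records = []

    def append(self, event):
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"event": event, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append({**body, "hash": digest})

    def verify(self):
        """Recompute every hash and check each link back to its predecessor."""
        prev = "0" * 64
        for rec in self.records:
            body = {"event": rec["event"], "prev": rec["prev"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True

# Hypothetical path of one drug batch from manufacturer to consumer.
ledger = ProvenanceLedger()
ledger.append({"batch": "LOT-42", "actor": "manufacturer", "step": "produced"})
ledger.append({"batch": "LOT-42", "actor": "distributor", "step": "shipped"})
ledger.append({"batch": "LOT-42", "actor": "pharmacy", "step": "received"})
```

Rewriting any earlier event (say, a distributor hiding a diverted shipment) invalidates every stored hash from that point on, which is what makes the history auditable.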
A content analysis of the websites of companies developing distributed ledger technology solutions was performed on five examples found with the Google search engine by keyword search. An analysis of the pharmaceutical supply chain structure and of the capacity of distributed ledger technology to improve pharmaceutical companies has been carried out and presented. Furthermore, the article reviews today's projects released to the market, as well as a prognosis for distributed ledger technology in the future enhancement of the pharmaceutical industry.

We investigate the automatic design of communication in swarm robotics through two studies. We first introduce Gianduja, an automatic design method that generates collective behaviors for robot swarms in which individuals can locally exchange a message whose semantics is not fixed a priori. It is the automatic design process that, on a per-mission basis, defines the conditions under which the message is sent and the effect it has on the receiving peers. We then extend Gianduja to Gianduja 2 and Gianduja 3, which target robots that can exchange multiple distinct messages; here too, the semantics of the messages is automatically defined on a per-mission basis by the design process. Gianduja and its variants are based on Chocolate, which does not provide any support for local communication. In the article, we compare Gianduja and its variants with a standard neuro-evolutionary approach across a total of six different swarm robotics missions. We present results based on simulation and on tests performed with 20 e-puck robots. The results show that Gianduja and its variants are typically able to associate a meaningful semantics to messages.

Shipborne radars not only enable navigation and collision avoidance but also play an important role in hydrographic data inspection and disaster monitoring.
In this paper, target extraction methods for oil films, ships and coastlines from original shipborne radar images are proposed. First, the shipborne radar video images are acquired by a signal acquisition card. Second, based on remote sensing image processing technology, the radar images are preprocessed and the contours of the targets are extracted. Then, the targets identified in the radar images are integrated into an electronic navigation chart (ENC) through a geographic information system. The experiments show that the proposed target segmentation methods for shipborne radar images are effective. Using the geometric feature information of the targets identified in the shipborne radar images, information matching between radar images and the ENC can be realized for hydrographic data inspection and disaster monitoring.
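The segmentation step above can be sketched at its simplest as thresholding followed by connected-component extraction. This is a pure-Python stand-in under assumed conditions (a tiny grayscale grid, an arbitrary threshold), not the paper's actual preprocessing pipeline.

```python
from collections import deque

def segment_targets(image, threshold):
    """Binary-threshold a grayscale grid and extract connected bright
    regions (candidate targets) via breadth-first flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    targets = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                region, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    # 4-connected neighbors above/below/left/right
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                targets.append(region)
    return targets

# Toy 5x6 "radar frame" with two bright blobs (say, a ship and an oil film).
frame = [
    [0, 0, 9, 9, 0, 0],
    [0, 0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 8, 8],
    [0, 0, 0, 0, 8, 0],
]
blobs = segment_targets(frame, threshold=5)
```

Each returned region is a list of pixel coordinates, from which geometric features (area, centroid, bounding contour) can be computed for matching against the ENC.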
In the last twenty years, new methodologies have made it possible to gather large amounts of data on the genetic information and metabolic functions associated with the human gut microbiome. Even so, processing all of the available data is not a simple task, which can leave an excess of information awaiting proper annotation. This assessment aimed to evaluate how well respected databases can describe a mock human gut microbiome.
In this work, we critically evaluate the output of the cross-reference between the UniProt Knowledge Base (UniProtKB) and either the Kyoto Encyclopedia of Genes and Genomes Orthologs (KEGG Orthologs) or the evolutionary genealogy of genes: Non-supervised Orthologous Groups (EggNOG) database, with regard to a list of species previously found in the human gut microbiome.
From a list comprising 131 species and 52 genera, 53 species and 40 genera had corresponding entries in the KEGG database, and 82 species and 47 genera had corresponding entries in the EggNOG database.
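The coverage comparison amounts to intersecting the reference species list with the set of species that have database hits. A minimal sketch, with a hypothetical four-species list and made-up hit sets (not the study's data):

```python
# Hypothetical reference list of gut species (illustrative names only).
gut_species = {
    "Bacteroides fragilis",
    "Escherichia coli",
    "Faecalibacterium prausnitzii",
    "Akkermansia muciniphila",
}

# Toy stand-ins for species with cross-referenced entries in each database.
kegg_hits = {"Bacteroides fragilis", "Escherichia coli"}
eggnog_hits = {"Bacteroides fragilis", "Escherichia coli",
               "Akkermansia muciniphila"}

def coverage(reference, db_entries):
    """Count and fraction of reference species found in the database."""
    found = reference & db_entries
    return len(found), len(found) / len(reference)

kegg_n, kegg_frac = coverage(gut_species, kegg_hits)
eggnog_n, eggnog_frac = coverage(gut_species, eggnog_hits)
```

The same set intersection applied at the genus level (after mapping each species name to its genus) yields the genus-level counts reported above.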