Decision making under measure-based granular uncertainty along with intuitionistic fuzzy models

Conventional decision trees use queries each of which is based on one attribute. In this study, we also examine decision trees that handle additional queries based on hypotheses. This kind of query is similar to the equivalence queries considered in exact learning. Earlier, we designed dynamic programming algorithms for the computation of the minimum depth and the minimum number of internal nodes in decision trees that have hypotheses. A modification of these algorithms considered in the present paper permits us to build decision trees with hypotheses that are optimal with respect to the depth or to the number of internal nodes. We compare the length and coverage of decision rules extracted from optimal decision trees with hypotheses and decision rules extracted from optimal conventional decision trees to determine which are preferable as a tool for the representation of information. To this end, we conduct computer experiments on various decision tables from the UCI Machine Learning Repository. We also consider decision tables for randomly generated Boolean functions. The collected results show that the decision rules derived from decision trees with hypotheses are in many cases better than the rules extracted from conventional decision trees.

Neural networks play a growing role in many scientific disciplines, including physics. Variational autoencoders (VAEs) are neural networks that are able to represent the essential information of a high-dimensional data set in a low-dimensional latent space that has a probabilistic interpretation. In particular, the so-called encoder network, the first part of the VAE, which maps its input onto a position in latent space, additionally provides uncertainty information in terms of the variance around this position. In this work, an extension to the autoencoder architecture, the FisherNet, is introduced.
In this architecture, the latent space uncertainty is not generated using an additional information channel in the encoder but is derived from the decoder by means of the Fisher information metric. This architecture has advantages from a theoretical point of view, as it provides a direct uncertainty quantification derived from the model and also accounts for uncertainty cross-correlations. We show experimentally that the FisherNet produces more accurate data reconstructions than a comparable VAE and that its learning performance also appears to scale better with the number of latent space dimensions.

Entropy is a quantity expressing the measure of disorder or unpredictability in a system and, from a more general point of view, can be regarded as an irreversible source of energy [...].

In the current work, using the formalism of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) equations for the distribution functions of particle groups, the effective single-particle potential near the surface of a liquid was analyzed. The thermodynamic conditions under which a sudden opening of the liquid surface leads to high-energy ejection of atoms and molecules were found. The energies of the emitted particles can significantly exceed their thermal energy. Criteria for the ejection stability of the liquid surface and for the self-acceleration of ejection were formulated. The developed theory was used to explain the phenomenon of the self-acceleration of gas-dust outbursts in coal mines during the explosive opening of methane traps. The results also explain the mechanisms generating significant amounts of methane and the formation of coal nanoparticles in gas-dust outbursts.
The developed approach was also used to explain the phenomenon of the self-ignition of hydrogen when it enters the atmosphere.

The moth-flame optimization (MFO) algorithm, inspired by the transverse orientation of moths toward a light source, is an effective approach to solving global optimization problems. However, the MFO algorithm suffers from issues such as premature convergence, low population diversity, local optima entrapment, and an imbalance between exploration and exploitation. In this study, therefore, an improved moth-flame optimization (I-MFO) algorithm is proposed to cope with the canonical MFO's issues by locating trapped moths in local optima via a memory defined for each moth. The trapped moths tend to escape from the local optima by taking advantage of the adapted wandering around search (AWAS) strategy. The efficiency of the proposed I-MFO is evaluated on the CEC 2018 benchmark functions and compared against other well-known metaheuristic algorithms. Moreover, the obtained results are statistically analyzed by the Friedman test on 30, 50, and 100 dimensions. Finally, the ability of the I-MFO algorithm to find the best optimal solutions for mechanical engineering problems is evaluated on three problems from the latest test suite, CEC 2020. The experimental and statistical results demonstrate that the proposed I-MFO is significantly superior to the contender algorithms and successfully remedies the shortcomings of the canonical MFO.

The paper addresses the problem of assessing complex socio-economic phenomena using questionnaire surveys. The data are represented on an ordinal scale; the object assessments may contain positive, negative, or no answers, as well as "difficult to say" or "no opinion" answers. The general framework of the Intuitionistic Fuzzy Synthetic Measure (IFSM), based on distances to the pattern object (ideal solution), is used to analyze the survey data. First, Euclidean and Hamming distances are applied in the procedure.
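As an illustration of such distance-based comparison, the following minimal sketch computes the standard normalized Hamming and Euclidean distances between intuitionistic fuzzy assessments, each given as a (membership, non-membership) pair with the hesitancy degree implied. The concrete numbers and the simple ideal object are illustrative assumptions, not the paper's data:

```python
import math

def hamming_ifs(a, b):
    """Normalized Hamming distance between two intuitionistic fuzzy sets.

    Each set is a list of (mu, nu) pairs; hesitancy pi = 1 - mu - nu.
    """
    n = len(a)
    total = 0.0
    for (mu1, nu1), (mu2, nu2) in zip(a, b):
        pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2
        total += abs(mu1 - mu2) + abs(nu1 - nu2) + abs(pi1 - pi2)
    return total / (2 * n)

def euclidean_ifs(a, b):
    """Normalized Euclidean distance, same conventions as above."""
    n = len(a)
    total = 0.0
    for (mu1, nu1), (mu2, nu2) in zip(a, b):
        pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2
        total += (mu1 - mu2) ** 2 + (nu1 - nu2) ** 2 + (pi1 - pi2) ** 2
    return math.sqrt(total / (2 * n))

# Hypothetical object assessments vs. an ideal "pattern" object
# (full membership, zero non-membership on every criterion).
obj = [(0.6, 0.3), (0.5, 0.4)]
ideal = [(1.0, 0.0), (1.0, 0.0)]
print(hamming_ifs(obj, ideal))    # distance of the object to the ideal
print(euclidean_ifs(obj, ideal))
```

A synthetic measure of this family then ranks objects by their distance to the pattern object: the smaller the distance, the closer the object is to the ideal solution.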
Second, two pattern object constructions are proposed in the procedure: one based on maximum values from the survey data, and the second on maximum intuitionistic values. Third, the method for criteria comparison with the Intuitionistic Fuzzy Synthetic Measure is presented. Finally, a case study solving the problem of rank-ordering cities in terms of satisfaction with local public administration, obtained using different variants of the proposed method, is discussed. Additionally, the results of a comparative analysis using the Intuitionistic Fuzzy Synthetic Measure and the Intuitionistic Fuzzy TOPSIS (IFT) framework are presented.

Wearable sensor-based HAR (human activity recognition) is a popular method of perceiving human activity. However, due to the lack of a unified human activity model, the number and positions of sensors in existing wearable HAR systems vary, which hinders promotion and application. In this paper, an information gain-based human activity model is established, and an attention-based recurrent neural network (Attention-RNN) for human activity recognition is designed. The Attention-RNN, which combines a bidirectional long short-term memory (BiLSTM) network with an attention mechanism, was tested on the UCI Opportunity challenge dataset. Experiments show that the proposed human activity model provides guidance for the deployment locations of sensors and a basis for selecting the number of sensors, which can reduce the number of sensors needed to achieve the same classification effect. In addition, experiments show that the proposed Attention-RNN achieves F1 scores of 0.898 and 0.911 in the ML (Modes of Locomotion) task and the GR (Gesture Recognition) task, respectively.

We extend collisional quantum thermometry schemes to allow for stochasticity in the waiting time between successive collisions.
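Stochastic waiting times of this kind can be generated from a Weibull distribution, the family considered in this line of work. The following sketch samples inter-collision waits and accumulates collision epochs; the shape and scale values are illustrative assumptions:

```python
import numpy as np

def collision_times(n, shape_k, scale_lam, rng):
    """Sample n inter-collision waiting times from a Weibull(k, lambda)
    distribution and return the cumulative collision epochs.

    shape_k = 1 recovers the memoryless (exponential) case; other values
    of k introduce non-trivial randomness in the collision schedule.
    """
    # NumPy's weibull draws scale-1 samples; multiply by lambda to rescale.
    waits = scale_lam * rng.weibull(shape_k, size=n)
    return np.cumsum(waits)

rng = np.random.default_rng(0)
times = collision_times(1000, shape_k=0.5, scale_lam=1.0, rng=rng)

# Mean waiting time for Weibull(k, lambda) is lambda * Gamma(1 + 1/k);
# for k = 0.5 that is lambda * Gamma(3) = 2 * lambda.
print(times[-1] / 1000)  # empirical mean waiting time
```

In a simulation of the thermometry protocol, each epoch in `times` would mark one probe-sample collision, with the probe evolving freely in between.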
We establish that introducing randomness through a suitable waiting-time distribution, the Weibull distribution, allows us to significantly extend the parameter range for which an advantage over the thermal Fisher information is attained. These results are explicitly demonstrated for dephasing interactions and also hold for partial swap interactions. Furthermore, we show that the optimal measurements can be performed locally, implying that genuine quantum correlations do not play a role in achieving this advantage. We explicitly confirm this by examining the correlation properties of the deterministic collisional model.

Metabolism and physiology frequently follow non-linear rhythmic patterns, which are reflected in the concepts of homeostasis and circadian rhythms, yet few biomarkers are studied as dynamical systems. For instance, healthy human development depends on the assimilation and metabolism of essential elements, often accompanied by exposures to non-essential elements which may be toxic. In this study, we applied laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) to reconstruct longitudinal exposure profiles of essential and non-essential elements throughout prenatal and early post-natal development. We applied cross-recurrence quantification analysis (CRQA) to characterize the dynamics involved in elemental integration and to construct a graph-theory-based analysis of elemental metabolism. Our findings show how exposure to lead, a well-characterized toxicant, perturbs the metabolism of essential elements. In particular, they indicate that high levels of lead exposure dysregulate global aspects of metabolic network connectivity. For example, the magnitude of each element's degree was increased in children exposed to high lead levels. Similarly, high lead exposure yielded discrete effects on specific essential elements, particularly zinc and magnesium, which showed reduced network metrics compared to other elements.
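The cross-recurrence computation at the core of CRQA can be sketched minimally as follows, assuming a simple threshold criterion on one-dimensional signals; the embedding choices are omitted and the element time series are synthetic stand-ins, not the study's data:

```python
import numpy as np

def cross_recurrence(x, y, eps):
    """Binary cross-recurrence matrix: R[i, j] = 1 when the two signals
    are within eps of each other at times i and j, else 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dist = np.abs(x[:, None] - y[None, :])   # pairwise |x_i - y_j|
    return (dist < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, the simplest CRQA summary measure."""
    return R.mean()

# Two hypothetical element time series (e.g. zinc- and magnesium-like
# rhythms with a phase offset), purely for illustration.
t = np.linspace(0, 4 * np.pi, 200)
series_a = np.sin(t)
series_b = np.sin(t + 0.3)
R = cross_recurrence(series_a, series_b, eps=0.1)
print(recurrence_rate(R))
```

Graph-theoretic measures such as an element's degree can then be read off a network whose edge weights are derived from pairwise recurrence rates between element profiles.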
In sum, this approach presents a new, systems-based perspective on the dynamics involved in elemental metabolism during critical periods of human development.

To determine the effects of Ti and mixing entropy (ΔSmix) on the structure and mechanical properties of Zr-Ta alloys, and to find a new potential energetic structural material with good mechanical properties and more reactive elements, TixZr2.5-xTa (x = 0, 0.5, 1.0, 1.5, 2.0) alloys were investigated. The XRD experimental results showed that the phase transformation of TixZr2.5-xTa non-equal-ratio ternary alloys depended not on the value of ΔSmix but on the amount of Ti atoms. With the addition of Ti, the content of the HCP phase decreased gradually. SEM analyses revealed that dendrite morphology and component segregation first developed increasingly and then weakened gradually. When x increases to 2.0, the TixZr2.5-xTa alloy with the best mechanical properties is obtained. The yield strength, compressive strength, and fracture strain of Ti2.0Zr0.5Ta reached 883 MPa, 1568 MPa, and 34.58%, respectively. The dependence of the phase transformation and mechanical properties on Ti content confirms that improving the properties of Zr-Ta alloys by doping with Ti is feasible.

In this paper we study an impulsive delayed reaction-diffusion model applied in biology. The introduced model generalizes existing delayed reaction-diffusion epidemic models to the impulsive case. The notion of integral manifolds is introduced for the model under consideration. This notion extends the single-state notion and has important applications in the study of multi-stable systems. By means of an extension of the Lyapunov method, results on the existence of integral manifolds are established. Based on the Lyapunov function technique combined with a Poincaré-type inequality, qualitative criteria related to boundedness, permanence, and stability of the integral manifolds are also presented.
The application of the proposed impulsive control model is closely related to one of the most important problems in mathematical biology: the optimal control of epidemic models. The considered impulsive effects can be used by epidemiologists as a very effective therapy control strategy. In addition, since the integral manifolds approach is relevant in various contexts, our results can be applied in the qualitative investigation of many problems in epidemiology of diverse interest.
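A toy numerical illustration of an impulsive delayed reaction-diffusion system in the spirit of the model discussed above: the delayed-logistic reaction term, all parameter values, and the periodic impulse schedule are assumptions chosen for illustration, not the paper's model.

```python
import numpy as np

# Toy model (illustrative):  u_t = D u_xx + r * u(t - tau) * (1 - u),
# with impulses u -> (1 - h) * u applied at fixed times, mimicking a
# periodic therapy/control action, and zero-flux boundaries on [0, 1].
D, r, tau, h = 0.01, 1.0, 0.1, 0.5
nx, dx, dt = 51, 0.02, 0.005          # explicit scheme: D*dt/dx^2 = 0.125 < 0.5
delay_steps = int(tau / dt)
impulse_every = 400                   # impulse at t = 2.0, 4.0, ...
steps = 1200

u = 0.1 + 0.05 * np.cos(np.linspace(0, np.pi, nx))   # initial state
history = [u.copy()] * (delay_steps + 1)             # constant pre-history

for k in range(steps):
    u_tau = history[0]                               # u(t - tau)
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2               # zero-flux (Neumann)
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (D * lap + r * u_tau * (1 - u))
    if (k + 1) % impulse_every == 0:
        u = (1 - h) * u                              # impulsive effect
    history.pop(0)
    history.append(u.copy())

print(u.min(), u.max())
```

Between impulses the state relaxes toward the delayed-logistic equilibrium; each impulse knocks it down, so trajectories starting from nearby initial data are drawn toward a common impulsive regime, which is the single-trajectory analogue of the attracting integral manifolds studied analytically in the paper.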