Novel approaches of data-mining in experimental physics

Contents:
Data mining peculiarities for experimental high energy physics (HEP)
2. Transition radiation detector (TRD)
Example 3: the OPERA experiment
The OPERA experiment: search for neutrino oscillations (OPERA is running)
NN application examples: 1. RICH detector
NN for the RICH detector
NN application examples: 2. e-/π± separation by transition radiation
NN application examples: 3. OPERA experiment
Recurrent ANNs and applications
Our innovations (I. Kisel, 1992)
Elastic neural networks
M-estimate formalism
How to choose the weight function w(ε)?
Application examples: 1. Determination of the interaction vertex
Application examples: 2. TDC calibration problem (HERA-B)
Some retrospections
Wavelets can be applied for extracting very special features of mixed and contaminated signals
Continuous wavelets: pro and contra
NEW: Back to continuous wavelets
Estimating peak parameters in the G2 wavelet domain
Application results to CBM invariant mass spectra
NEW: Example with a set of FOPI data
Clustering in data mining
Slide 1. Novel approaches of data-mining in experimental physics. XXIV International Symposium on Nuclear Electronics & Computing, Varna, 09-16 September 2013. G.A. Ososkov, Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna, Russia. E-mail: ososkov@jinr.ru, http://www.jinr.ru/~ososkov. 11/5/2015.

Slide 2. Data mining concept. Classical approaches to processing experimental data presuppose a model that describes the physical system and rests on an advanced physical theory. The observed data are then used to verify the underlying models and to estimate their direct or indirect parameters. Now, when the experimental data stream reaches terabytes per second, we enter the BIG DATA era: often lacking the corresponding theory, our data-handling paradigm shifts from classical modeling and data analysis to developing models and the corresponding analyses directly from the data (data-driven detector alignment is an example). The entire process of applying a computer-based methodology, including new techniques, for discovering knowledge from data is called data mining. Wikipedia: "it is the analysis of large amounts of data about experimental results held on a computer in order to get information about them that is not immediately available or obvious."

Slide 3. Data mining methods. Data mining is the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management. It commonly involves four classes of tasks: association rule learning, which searches for relationships between variables; clustering, the task of discovering groups and structures in the data that are in some way "similar", without using known structures in the data; classification, the task of generalizing known structure to apply to new data; and regression, which attempts to find a function that models the data with the least error. Although data mining methods (DMM) are oriented mostly toward business and social-science data, in recent years data mining has been widely used in science and engineering: bioinformatics, genetics, medicine, education and electrical power engineering. A great volume of DMM software now exists, both open-source and commercial. However, one will not find experimental physics among the DMM application domains. Therefore we are going to see what DMM look like when the data come from high energy physics.

Slide 4. Data mining peculiarities for experimental high energy physics (HEP). Let us consider some examples. 1. The CBM (Compressed Baryonic Matter) experiment (GSI, Germany, to be running in 2018): 10^7 events per second, ~1000 tracks per event, ~100 numbers per track; in total, terabytes per second! RICH: the Cherenkov radiation detector. Our problem is to recognize all of its rings and evaluate their parameters despite their overlapping, the noise and the optical shape distortions. Figures: schematic view of the CBM setup; TRD; RICH; a simulated central Au+Au collision in the vertex detector; Cherenkov radiation rings registered by the CBM RICH detector.

Slide 5. 2. Transition radiation detector (TRD). TRD measurements allow reconstructing the 3D track of each particle and calculating its energy loss (EL) during its passage through all 12 TRD stations, in order to distinguish electrons e- from pions π±. Unlike π±, electrons additionally generate transition radiation (TR) in the TRD. Our problem is to use the distributions of EL+TR for e- and π± to test the hypothesis attributing a particle to one of these alternatives, keeping the probability α of an error of the first kind fixed at α = 0.1 and the probability β of an error of the second kind below 0.004. Both distributions were simulated by the special program GEANT4, taking into account all details of the experimental setup and the physical assumptions related to heavy-ion collisions. However, a test based on a direct cut on the sum of energy losses could not satisfy these requirements, because both EL and TR have long-tailed Landau distributions. The main lesson: a transformation is needed to reduce the Landau tails of EL. Figure: TR production.

Slide 6. Example 3: the OPERA experiment. LNGS: the world's largest underground physics laboratory. Neutrino beam. Search for neutrino oscillations. 1600 m in depth, ~100,000 m³ of cavern volume. (A. Ereditato, LNGS, 31 May 2010.)

Slide 7. The OPERA experiment: search for neutrino oscillations (OPERA is running). Each wall is accompanied by two planes of electronic trackers made of scintillator strips. The crucial issue in OPERA is finding the particular brick where the neutrino interaction took place. Tracks formed by the scintillator hits should originate from a single point, the vertex. The main obstacle, however, is back-scattered particles (BSP), occurring in 50% of events, which carry no useful information. Emulsion scanning to determine neutrino oscillation is a separate task outside this talk. Figures: real vertex; two types of OPERA events with BSP; hadron shower axis.

Slide 8. The particular features of the data from these detectors are as follows: the data arrive at an extremely high rate; the recognized patterns are discrete and have a complex texture; there is a very high multiplicity of objects (tracks, Cherenkov radiation rings, showers) to be recognized in each event; the number of background events similar to "good" events exceeds the number of the latter by several orders of magnitude; noise counts are numerous and correlated. The basic requirements for data processing in current experiments are maximum computing speed combined with the highest attainable accuracy and high efficiency of the methods for estimating the physical parameters of interest to experimentalists.

Slide 9. Data mining in experimental HEP 1. To understand the need for analyses of large, complex, information-rich data sets in HEP, let us start from the stages of HEP data processing. 1. Pre-processing is a very important stage. It includes: Data acquisition — before data-mining algorithms can be used, a target data set must be assembled and converted from the raw format of the detector counters into natural units. Data transformation — to bring the data into forms appropriate for mining, they must be corrected for detector distortions and misalignment by special calibration and alignment procedures. Data selection — the data must then be cleaned to remove noisy, inconsistent and other observations that do not satisfy the acceptance conditions; this can be accomplished by a special, often quite sophisticated triggering procedure that usually causes a significant reduction of the target data (several orders of magnitude).

Slide 10. Data mining in experimental HEP 2. 2. HEP data processing involves the following stages and methods. Pattern recognition — hit detection, tracking, vertex finding, revealing Cherenkov rings, removing fake objects, etc. — employing cluster analysis, the Hough transform, the Kalman filter, neural networks, cellular automata and wavelet analysis. Physical parameter estimation — robust M-estimates. Hypothesis testing — the likelihood ratio test, the neural network approach, boosted decision trees. The exposition that follows presents some retrospectives of the JINR experience to illustrate the HEP data processing steps.

Slide 11. Data mining in experimental HEP 3. Monte-Carlo simulations are used at all stages and make it possible to: accomplish in advance the experimental design of the hardware setup and of the data-mining algorithms and optimize them from the point of view of money, materials and time; develop the needed software framework and test it; optimize the structure and equipment of the planned detectors, minimizing costs and timing for the proposed efficiency and accuracy; calculate in advance all the distributions and thresholds needed for goodness-of-fit tests. Parallel programming of the optimized algorithms is inevitable. Software quality assurance (SQA) is a very important issue in the development of any large programming system. GRID technologies have changed HEP data processing considerably; its stages now correspond more and more to the GRID Tier hierarchy. Since each of these items needs a long separate exposition, they will only be noted briefly below.

Slide 12. Some retrospections. 1. Artificial neural networks. Why ANNs for a contemporary HEP experiment? Historically, it was physicists who wrote one of the first NN programming packages, JETNET, back in the eighties; they were also among the first neuro-chip users. After being trained, an ANN is one of the most appropriate tools for implementing many data-handling tasks, while on the basis of some new physical model physicists can generate Monte-Carlo training samples of any needed length. Note also the appearance of TMVA, the Toolkit for Multivariate Data Analysis with ROOT. Thus many real problems in experimental physics have been solved with ANNs: object recognition and classification; statistical hypothesis testing; expert system implementation; approximation of many-dimensional functions; solution of non-linear differential equations; etc.

Slide 13. NN application examples: 1. RICH detector. A fragment of the photodetector plane: on average there are 1200 points per event, forming 75 rings. Data processing stages: ring recognition and evaluation of the ring parameters; compensation of the optical distortions that lead to elliptic ring shapes; matching the found rings with the tracks of particles interesting to physicists; elimination of fake rings, which could lead to wrong physical conclusions; particle identification at a fixed level of ring recognition efficiency. Figures: a sketch of the RICH detector; radius versus momentum for the reconstructed rings.

Slide 14. NN for the RICH detector. A study was made to select the most informative ring features needed to distinguish between good and fake rings and to identify electrons. Ten of them were chosen as ANN inputs: the number of points in the found ring; its distance to the nearest track; the biggest angle between two neighbouring points; the number of points in a narrow corridor surrounding the ring; the radial ring position on the photodetector plane; the χ² of the ellipse fit; both ellipse half-axes (A and B); the angle φ of the ellipse inclination to the abscissa; the track azimuth; the track momentum. Two samples, 3000 e (target +1) and 3000 π (target −1), were simulated to train the NN; 40000 e and π rings were used for training in total. With the electron recognition efficiency fixed at 90%, error probabilities of 0.018 (first kind) and 0.0004 (second kind) were obtained. Figure: ANN output for electrons and π-mesons.

Slide 15. NN application examples: 2. e-/π± separation by transition radiation. Two ideas to avoid the obstacles of the simple cut test and the long tails of the energy-loss (ΔE) distributions: 1. Apply an artificial neural network for the testing. 2. Calculate the likelihood ratio for the ΔE of each TRD station as input to the ANN. We use Monte-Carlo calculations to simulate a representative sample of TRD signals for the given experimental conditions, obtain the energy losses from all n TRD stations for both e- and π±, sort them, and calculate the probability density functions (PDF) for the ordered ΔEs. Then we repeat the simulation to train a neural network with n inputs and one output neuron, which should equal +1 for an electron and −1 for a pion; as inputs, the likelihood ratios for each ΔE are used. Testing the trained network gave a probability of an error of the second kind β = 0.002, which satisfied the experimental requirements. It is interesting to note that applying the Boosted Decision Trees algorithm from TMVA improves the pion suppression by 15-20% compared with the NN. Figure: ANN output distribution.

Slide 16. NN application examples: 3. OPERA experiment. According to the 3 classes of events, 3 neural networks of MLP type were trained, one per class, each on 20000 simulated events, to decide which wall contains the event vertex. A wall-finding efficiency at the level of 80-90% was then obtained on 10000 test events. The NN results were then used in the brick-finding procedure. To facilitate the vertex location, considerable data preprocessing was carried out to eliminate, or at least reduce, the electronic noise. The method was based on: a cellular automaton rejecting points that have no nearest neighbours; reconstruction of the muon tracks (Hough transform, Kalman filter); an M-estimate of the hadron shower axis with 2D robust weights that take into account not only the distance of a point to the shower axis but also the amplitudes of the scintillator signals; and a study to determine the 15 parameters to be input to the ANN.

Slide 17. Recurrent ANNs and applications. Hopfield's theorem: the energy function E(s) = -½ Σij si wij sj of a recurrent NN with a symmetric weight matrix (wij = wji, wii = 0) has local minima corresponding to the stability points of the NN. Applications at JINR: 1. Track recognition by the Denby-Peterson (1988) segment model, with modifications, was successfully used for tracking in the EXCHARM experiment. 2. More rare: track recognition by rotor models of Hopfield networks. In the energy function, the first term forces neighbouring rotors to be close to each other; the second term does the same between rotors and track segments.

Slide 18. Our innovations (I. Kisel, 1992). Analysis of ionograms. The corresponding program is in use up to now at the Irkutsk Institute of Terrestrial Magnetism, Russia, and at Lowell University, MA, USA. We therefore obtain a simple energy function without any constraints. The approach was applied in the ARES experiment with some extra efforts: prefiltering by a cellular automaton; a local Hough algorithm for the initial rotor set-up; special robust multipliers for the synaptic weights. Result: a recognition efficiency of 98% on data from vertical sounding of the ionosphere.

Slide 19. Elastic neural networks. ANN drawbacks revealed by physicists in many HEP applications: too slow convergence of the ANN evolution due to too many degrees of freedom; only recognition is performed, without taking the known track model into account; over-sensitivity of ANNs to noise. It was therefore suggested to combine both stages, recognition and fitting of a track, in a single procedure in which deformable templates (elastic arms) formed by the equations of particle motion are bent so as to overlay the data from the detector; a routine then has to evaluate whether or not a template matched a track. Ohlsson and Peterson (O&P, 1992) from Lund University realized this idea as a special Hopfield net with an energy function that depends on the helix parameters describing a track and on binary neurons Sia, each equal to 1 or 0 according to whether the i-th point belongs to the a-th track or not. Gyulassy and Harlander (G&H, 1991) proposed their elastic tracking, which can be described physically as the interaction between a positively charged template and the negatively charged spatial points measured on the track: the better the elastic template fits the points, the lower the energy of their interaction. Using a Lorentz potential with a temperature-dependent width — where a is the maximal distance at which points are still accredited to the template and b << a is the spatial resolution of the detector — G&H obtained an energy to be minimized over the helix parameters p.

Slide 20. Elastic neural network applications. To keep E(p,t) from getting caught in spurious local minima, a simulated annealing iterative procedure is applied: on the first iteration the width w(t) is taken for the highest temperature, when E(p,t) has only one minimum; then w(t) is narrowed gradually, allowing a more and more accurate search for the global minimum. G&H elastic tracking was applied to STAR TPC simulated data with remarkably high track-finding efficiency (1998). O&P elastic NNs, after corresponding modifications, were successfully applied to Cherenkov ring search and track reconstruction (1997). Drift-chamber tracks, with their left-right ambiguity in a magnetic field, demanded the invention of 2D neurons Si = (si+, si−) to determine the accreditation of a point to a track (1998). Important to note: the homogeneous magnetic field of the NICA-MPD project will make it possible to apply this elastic-arm approach to MPD TPC tracking.

Slide 21. Some retrospections. 2. Robust estimates for heavily contaminated samples. Why robust estimates? In all the preceding experimental examples we must solve typical statistical problems of parameter estimation from sets of measured data. However, we face not the usual applied statistics but a special mass-production statistics. The keywords are: heavy data contamination due to noisy measurements and measurements from neighbouring objects; the need for very fast algorithms of hypothesis testing and parameter estimation. Figure: comparison of the LSF and a robust fit in the case of a one-point outlier. How to achieve this? By a robust approach based on functional weights for each measurement, preferably with parallel algorithms.

Slide 22. M-estimate formalism. Instead of the LSF, with its crucial assumption of residual normality and the quadratic nature of the minimized functional, we consider P. Huber's M-estimate, i.e. we replace the quadratic functional S(p) to be minimized by L(p,ε) = Σi ρ(εi), where the measurement error ε is distributed according to J. Tukey's gross-error model f(ε) = (1−c)φ(ε) + c·h(ε); here c is the contamination parameter, φ(ε) is the Gaussian density and h(ε) is some long-tailed noise density. By an appropriate substitution, the likelihood equation for the functional L(p,ε) can be brought to a form similar to the normal LSF equations, but with the numerical weight coefficients replaced by weight functions w(ε) that are recalculated at each step of an iterative procedure.

Slide 23. How to choose the weight function w(ε)? For the particular but important case of uniform contamination h(ε) = h0 we found the optimal weights w(ε), whose polynomial expansion up to fourth order leads to the famous Tukey bi-weights, w(ε) = (1 − (ε/cT)²)² for |ε| ≤ cT and w(ε) = 0 otherwise (with ε measured in units of σ), which are easier to calculate than the optimal weights. A simulated annealing procedure is used to keep the functional from sticking in local minima. Recall the energy function of G&H elastic tracking: the Lorentz potential in that sum plays exactly the role of the robust functional weight.

Slide 24. Application examples: 1. Determination of the interaction vertex position from only two coordinate planes (NA-45). One of the two silicon disks carries 1000 track and noise hits, so it is impossible to recognize individual tracks. The Tukey bi-weight function with cT = 3 was used. The iterative procedure converged in five iterations, with the initial approximation taken as the middle of the Z-axis target region. Processing 4000 Pb+Au events gave a satisfactory accuracy of 300 μm along the Z-axis and good local track accuracy. The target consists of eight 25-μm gold discs.

Slide 25. Application examples: 2. TDC calibration problem (HERA-B). Many more applications have been reported, in particular for tracking in the presence of δ-electrons in the CMS muon endcap. The problem arises because real track detectors, such as drift chambers, measure the drift time in TDC (time-to-digital converter) counts, so before data processing the TDC counts must first be converted into drift radii. Such a transformation, called calibration, is inevitably data-driven, i.e. it is carried out statistically from the real TDC data of the current physics run. Here is an impressive example of the effectiveness of the robust approach. The fitting problem in such cases differs radically from any common one, since for every abscissa we have not one but many ordinates with different amplitudes. Therefore every point to be fitted was given a 2D weight depending both on its distance to the fitted curve and on its amplitude. It is shown how the calibration function r(t) can be obtained by fitting cubic splines directly to the 2D histogram of drift radii versus TDC counts, which consists of many thousands of bins with various amplitudes; the fitted spline is shown for the upper part only.

Slide 26. Some retrospections. 3. Wavelet analysis. What are continuous wavelets? In contrast to the best-known means of signal analysis, the Fourier transform, the one-dimensional wavelet transform (WT) of a signal f(x) has the 2D form Wψ(a,b) = |a|^(−1/2) ∫ f(x) ψ((x−b)/a) dx, where the function ψ is the wavelet, b is a displacement (time shift) and a is a scale (or frequency). The condition Cψ < ∞ guarantees the existence of ψ and of the inverse wavelet transform. Thanks to the freedom in the choice of ψ, many different wavelets have been invented. The family of continuous wavelets with vanishing moments is represented here by the Gaussian wavelets, generated by the derivatives of the Gaussian function; the best-known of them, G2, is named "the Mexican hat". The biparametric nature of wavelets often makes it possible to analyze the time and frequency characteristics of a signal simultaneously.

Slide 27. Wavelets can be applied for extracting very special features of mixed and contaminated signals. An example: a signal with a localized high-frequency part and considerable contamination; the G2 wavelet spectrum of this signal; then wavelet filtering is applied. Filtering works in the wavelet domain by thresholding the scales to be eliminated or extracted, followed by the inverse transform. Filtering results: the noise is removed and the high-frequency part is perfectly localized. NOTE: that is impossible with the Fourier transform.

Slide 28. Continuous wavelets: pro and contra. PRO: using wavelets we overcome the background estimation problem; wavelets are resistant to noise (robust). CONTRA: redundancy, and hence slow calculations; non-orthogonality (the signal is distorted after the inverse transform!). Besides, real signals to be analysed by computer are in principle discrete, so orthogonal discrete wavelets should be preferable. Denoising by DWT shrinkage: wavelet shrinkage means that certain wavelet coefficients are reduced to zero. Our innovation is adaptive shrinkage, i.e. the threshold λk = 3σk, where k is the decomposition level (k = scale1, ..., scalen) and σk is the RMS of the Wψ coefficients at this level (recall: the sample size is 2^n). Small peak finding with coiflets.

Slide 29. NEW: Back to continuous wavelets. Peak parameter estimation by Gaussian wavelets. When a signal is bell-shaped, it can be approximated by a Gaussian. One can then derive analytically that its wavelet transform looks like the corresponding wavelet, with parameters depending on the original signal parameters. Thus we can calculate those parameters directly in the wavelet domain instead of the time/space domain. The most remarkable point: we do not need the inverse transform!

Slide 30. Estimating peak parameters in the G2 wavelet domain. How does it work? 1. Take a noisy invariant mass spectrum and transform it by G2 into the wavelet domain. 2. Look for the maximum of the wavelet surface, bmax, amax. 3. From the formula for WG2(a,b;x0,σ) of a Gaussian peak one can derive analytical expressions for the position of this maximum, which should correspond to the found bmax, amax; thus the coordinates of the maximum can be used as estimates of the wanted peak parameters x0 and σ. 4. From them we can directly obtain the half-width, the amplitude and even the integral, provided the peak has a bell-shaped form.

Slide 31. Application results for CBM invariant mass spectra. Low-mass dileptons (muon channel), the ω peak. Gauss fit of the reconstructed signal: M = 0.7785, σ = 0.0125, A = 1.8166, Ig = 0.0569. Wavelets: M = 0.7700, σ = 0.0143, A = 1.8430, Iw = 0.0598. Figure: the ω wavelet spectrum (thanks to Anna Kiseleva). The other low-mass meson peaks are also visible in the wavelet space, so we could extract their parameters.

Slide 32. NEW: Example with a set of FOPI data, provided by N. Hermann, GSI, Darmstadt, Germany. G4 wavelets are used; the formula for obtaining σ is σ = amax/3. Despite the very jagged spectrum, the wavelets give visible peaks 1, 2, 3 with σ1 = σ3 = 0.013 and σ2 = 0.021 at a noise level σ = 0.009.

Slide 33. Clustering in data mining. Clustering is one of the important DM tasks because it allows seeking groups and structures in the data that are in some way "similar", without using known structures in the data. Clustering methods are widely used in HEP data processing to find the point of particle passage through the coordinate plane of a cell-structured detector. A new application of cluster analysis helps to develop the UrQMD fragmentation model of nuclear collisions at relativistic energies: clusters, or nuclear fragments, are generated via the dynamical forces between nucleons during their evolution in coordinate and momentum space. A new two-step clustering method is proposed for BIG DATA: it performs a quantization of the input data by generating a so-called Voronoi partition, and the final clustering is then done by any conventional clustering method. A new promising watershed clustering algorithm is also proposed.

Slide 34. Parallel programming. Fortunately, the common structure of HEP experimental data, naturally organized as a sequence of events, offers natural multithreaded parallelism: events are handled simultaneously on different processors. However, the requirements of experiments such as CBM, which must handle terabytes of data per second, make parallelism necessary within each event as well, via the so-called SIMDization of the algorithms, which demands their substantial optimization and the vectorization of the input data. For instance, for the CBM TRD and MuCh tracking algorithms we obtain the following speedup of the track fitter on a computer with two Intel Core i7 CPUs (8 cores in total) at 2.67 GHz; throughput: 2·10^6 tracks/s.

Stage           Time [μs/track]   Speedup
Initial         1200              –
Optimization    13                92
SIMDization     4.4               3
Multithreading  0.5               8.8
Final           0.5               2400

Slide 35. Software quality assurance (SQA). Since the software framework of any contemporary HEP experiment is developed by an international team of a thousand collaborators with various programming skills, the software components they write can inevitably contain bugs, interconnection errors, or output that differs from what was specified. Therefore automated testing of the experimental framework software is needed, to provide: more reliable software and a speedup of its development; reduced development cycles; continuous integration and deployment; high code coverage, testing, ideally, all code in the repository; not only unit tests but also system tests for simulation and reconstruction. Known SQA systems, however, cannot be applied directly for these purposes, since they are based on reliability-theory methods and presuppose a highly qualified team of programmers and testers with immediate failure repair, whereas most software in our experimental collaborations is written by physicists who are not highly qualified in programming and cannot watch over immediate failure repair. For discussion: an automatic test system for a HEP experiment should perform report generation for simulation studies, automatic checks of the output results against predefined values and nightly monitoring of the simulation results, and it should be designed to be modular, so that it is easy to extend and to add new histograms.

Slide 36. Conclusion and outlook remarks. Advanced Monte-Carlo simulations are of great importance. Robust estimates, neural networks and wavelet applications are really significant for data mining in HEP. It looks reasonable to provide wavelet analysis tools in ROOT. The focus in developing data-mining algorithms for HEP is shifting to their optimization and parallelization, in order to speed them up considerably while keeping their efficiency and accuracy. Parallelism is inevitably to be introduced on the basis of new computing and software technologies. The software reliability concept is essential. Distributed and cloud computing are growing; in HEP this is accomplished by GRID technologies.

Slide 37. Thanks for your attention!

Slide 38. SQA example for FAIR GSI. The general SQA structure: 1. Histogram Creator — manages a large number of histograms. 2. Drawer, Feature extractor, Report generator, Result checker — they provide the base classes for simulation and study report generation, the base functionality for histogram drawing and for serializing/deserializing images to/from XML/JSON, and reports in HTML, text and LaTeX. 3. SQA monitoring (SQAM) — its features allow users to easily increase the number of tests for different collision systems, energies, detector geometries, etc. SQAM provides automatic testing of simulation, reconstruction and analysis and automatic checks of the simulation results. Current SQAM status: about 30 tests run nightly.
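The robust-fitting scheme of the M-estimate slides above — weight functions recalculated at every step of an iterative procedure, with Tukey bi-weights and cT = 3 as in the NA-45 vertex example — can be sketched as iteratively reweighted least squares. This is a minimal illustration, not the authors' code: the straight-line model, the one-outlier data and the MAD-based scale estimate are assumptions made here to keep the sketch self-contained.

```python
import statistics

def fit_line_lsq(xs, ys, ws=None):
    """(Weighted) least-squares fit of y = slope*x + icept."""
    if ws is None:
        ws = [1.0] * len(xs)
    sw = sum(ws)
    xm = sum(w * x for w, x in zip(ws, xs)) / sw
    ym = sum(w * y for w, y in zip(ws, ys)) / sw
    sxy = sum(w * (x - xm) * (y - ym) for w, x, y in zip(ws, xs, ys))
    sxx = sum(w * (x - xm) ** 2 for w, x in zip(ws, xs))
    slope = sxy / sxx
    return slope, ym - slope * xm

def tukey_weight(r, scale, c_t=3.0):
    """Tukey bi-weight: w = (1 - (r/(c_t*scale))^2)^2 inside the cut, 0 outside."""
    u = r / (c_t * scale)
    return (1.0 - u * u) ** 2 if abs(u) < 1.0 else 0.0

def robust_fit_line(xs, ys, n_iter=10):
    """M-estimate via iteratively reweighted LSF with Tukey bi-weights."""
    slope, icept = fit_line_lsq(xs, ys)              # start from the plain LSF
    for _ in range(n_iter):
        res = [y - (slope * x + icept) for x, y in zip(xs, ys)]
        mad = statistics.median(abs(r) for r in res)
        scale = max(1.4826 * mad, 1e-9)              # robust scale (MAD-based)
        ws = [tukey_weight(r, scale) for r in res]   # recalc weights each step
        slope, icept = fit_line_lsq(xs, ys, ws)
    return slope, icept

# Nine points exactly on y = 2x + 1 plus one gross outlier:
xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]
ys[9] = 100.0
ls_slope, _ = fit_line_lsq(xs, ys)          # pulled far away from 2 by the outlier
rob_slope, rob_icept = robust_fit_line(xs, ys)
```

The outlier's weight drops to zero as soon as its residual leaves the cT·scale window, which reproduces the "comparison of LSF and robust fit for a one-point outlier" picture: the LSF slope is badly biased while the robust fit recovers the clean line.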
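The peak-parameter estimation in the G2 wavelet domain can be tried numerically: take a bell-shaped (Gaussian) signal, scan the wavelet surface W(a,b), and read the peak position and width off the location of its maximum — no inverse transform needed. A sketch, not the authors' code: for the 1/a-normalized transform used below the width relation works out to σ = amax/√2; the constant depends on the wavelet order and normalization (the FOPI slide quotes σ = amax/3 for the G4 wavelet), and the grid ranges are illustrative.

```python
import math

def g2(t):
    """G2 "Mexican hat" wavelet (second derivative of a Gaussian, unnormalized)."""
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def wt_g2(f, xs, dx, a, b):
    """Discretized CWT with 1/a normalization: (1/a) * sum f(x) g2((x-b)/a) dx."""
    return sum(f(x) * g2((x - b) / a) for x in xs) * dx / a

# Bell-shaped test signal: Gaussian peak at x0 with width sigma.
x0, sigma = 5.0, 0.4
f = lambda x: math.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2))

dx = 0.02
xs = [i * dx for i in range(501)]             # integration grid on [0, 10]

# Scan the (a, b) plane and locate the wavelet-surface maximum.
best = (-1.0, None, None)
for ia in range(10, 51):                      # a in [0.2, 1.0]
    a = ia * 0.02
    for ib in range(160, 241):                # b in [4.0, 6.0]
        b = ib * 0.025
        w = wt_g2(f, xs, dx, a, b)
        if w > best[0]:
            best = (w, a, b)

_, a_max, b_max = best
x0_est = b_max                                # peak position = b at the maximum
sigma_est = a_max / math.sqrt(2.0)            # width from a_max (this normalization)
```

For a Gaussian of width σ this W(a, x0) behaves as a²/(σ²+a²)^(3/2), so the scale maximum sits at amax = σ√2, which is what the grid scan recovers.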
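The adaptive shrinkage rule λk = 3σk from the "pro and contra" slide can be sketched with the simplest orthogonal wavelet, Haar (the slide mentions coiflets; Haar is substituted here purely for brevity). The narrow test peak and noise level are invented for the demo; only the detail levels are thresholded, the coarse average is kept untouched.

```python
import math, random

def haar_forward(sig):
    """Full Haar DWT of a length-2^n signal; details[0] is the finest scale."""
    avg, details = list(sig), []
    while len(avg) > 1:
        pairs = [(avg[2 * i], avg[2 * i + 1]) for i in range(len(avg) // 2)]
        details.append([(p - q) / math.sqrt(2) for p, q in pairs])
        avg = [(p + q) / math.sqrt(2) for p, q in pairs]
    return avg, details

def haar_inverse(avg, details):
    rec = list(avg)
    for d in reversed(details):               # coarsest level first
        rec = [v for a, dd in zip(rec, d)
                 for v in ((a + dd) / math.sqrt(2), (a - dd) / math.sqrt(2))]
    return rec

def adaptive_shrink(details, k=3.0):
    """Hard-threshold each level at lambda_k = k * RMS of that level's coefficients."""
    out = []
    for d in details:
        lam = k * math.sqrt(sum(c * c for c in d) / len(d))
        out.append([c if abs(c) > lam else 0.0 for c in d])
    return out

# Narrow peak (a small "mass peak") plus Gaussian noise:
random.seed(7)
n = 256
clean = [2.0 * math.exp(-((i - 100) ** 2) / (2.0 * 2.0 ** 2)) for i in range(n)]
noisy = [c + random.gauss(0.0, 0.4) for c in clean]

avg, det = haar_forward(noisy)
denoised = haar_inverse(avg, adaptive_shrink(det))

mse = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)
```

At noise-dominated levels the threshold sits near 3× the noise RMS and zeroes essentially everything, while strong signal coefficients survive it — which is the point of making λk adapt to each decomposition level.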
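The two-step BIG-DATA clustering described on the clustering slide — quantize the input data into a Voronoi partition, then cluster the (few) cell representatives by any conventional method — can be sketched as follows. This is a simplified stand-in, not the proposed algorithm itself: a regular grid serves as the quantizer (its cell centers are the Voronoi sites), single-link merging serves as the "conventional" second step, and all coordinates are illustrative.

```python
import math
from collections import defaultdict

def quantize(points, cell):
    """Step 1: quantize points onto a grid (a Voronoi partition whose sites are
    the grid-cell centers); keep each occupied cell's centroid and hit count."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(math.floor(x / cell), math.floor(y / cell))].append((x, y))
    reps = []
    for pts in cells.values():
        reps.append((sum(p[0] for p in pts) / len(pts),   # centroid x
                     sum(p[1] for p in pts) / len(pts),   # centroid y
                     len(pts)))                           # cell weight (unused below)
    return reps

def single_link(reps, d_max):
    """Step 2: conventional clustering of the representatives -- here
    single-link via union-find: merge representatives closer than d_max."""
    labels = list(range(len(reps)))
    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]
            i = labels[i]
        return i
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            if math.hypot(reps[i][0] - reps[j][0],
                          reps[i][1] - reps[j][1]) <= d_max:
                labels[find(i)] = find(j)
    return len({find(i) for i in range(len(reps))})

# Two well-separated blobs of hits:
blob_a = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (0.4, 0.2), (0.3, 0.4)]
blob_b = [(5.0, 5.0), (5.2, 5.1), (5.1, 4.8), (4.9, 5.2)]
reps = quantize(blob_a + blob_b, cell=1.0)
n_clusters = single_link(reps, d_max=1.5)
```

The quantization step is what makes the scheme attractive for BIG DATA: the expensive pairwise clustering runs only over the occupied-cell representatives, not over the raw hits.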