Full-text resources of PSJD and other databases are now available in the new Library of Science.
Visit https://bibliotekanauki.pl
Results found: 31

Search results

Search in the keywords: 05.45.Tp
A new algorithm for the analysis of correlations among economic time series is proposed. The algorithm is based on the power law classification scheme followed by an analysis of the network at the percolation threshold. It was applied to the correlations among gross domestic product per capita time series of the 19 most developed countries in the periods (1982, 2011), (1992, 2011) and (2002, 2011). The representative countries with respect to strength of correlation, convergence of time series and stability of correlation are distinguished. The results are compared with those obtained from the ultrametric distance matrix analysed by a network at the percolation threshold.
Acta Physica Polonica A | 2015 | vol. 127 | issue 3A | A-103-A-107
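The percolation-threshold network construction mentioned in the abstract can be sketched as follows. The PLCS itself is not specified here, so the sketch uses the standard Pearson-based ultrametric distance the paper compares against; both function names are hypothetical:

```python
import numpy as np

def ultrametric_distance(series):
    """Pearson-based ultrametric distance matrix d_ij = sqrt(2 (1 - rho_ij))."""
    rho = np.corrcoef(series)
    # clip to guard against tiny negative values from floating-point error
    return np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))

def percolation_threshold_network(dist):
    """Add edges in order of increasing distance until the graph first
    becomes connected -- the percolation threshold (union-find bookkeeping)."""
    n = dist.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    chosen, components = [], n
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
        chosen.append((i, j, d))
        if components == 1:
            break  # threshold reached: a single giant component
    return chosen
```

In the setting above, the rows of `series` would be the GDP-per-capita time series and the nodes of the resulting network the 19 countries.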
Cross-correlations among six chosen main world financial markets are analysed by the power law classification scheme (PLCS). The markets are represented by the indices DAX (Frankfurt), FTSE (London), S&P 500 (New York), HSI (Hong Kong), Nikkei 225 (Tokyo) and STI (Singapore) over the interval from 24.09.1991 till 31.01.2014. The time series are transformed into daily returns and the normalised daily range of the indices. The evolution of the correlation strength is analysed using a moving time window. It is shown that the correlation strength properly characterises crisis and prosperity periods. Moreover, the value of the correlation strength can be related to the severity of a crisis. The results are compared with the standard ultrametric distance based on the Pearson coefficient.
An analysis of the influence of crises on the cross-correlations of daily exchange rate time series from the foreign exchange market (forex) is presented. The analysis was conducted on 42 exchange rates with PLN as the common base currency. The time series cover the period from 09.10.2007 till 08.08.2015. Cross-correlations of the time series were analysed by the power law classification scheme. It was shown that the strength of correlation not only allows crisis and prosperity periods to be properly distinguished but also, when followed by network analysis, is capable of recognizing the nodes which are the source of the crisis.
We present experimental and numerical studies for level statistics in incomplete spectra obtained with microwave networks simulating quantum chaotic graphs with broken time reversal symmetry. We demonstrate that, if resonance frequencies are randomly removed from the spectra, the experimental results for the nearest-neighbor spacing distribution, the spectral rigidity and the average power spectrum are in good agreement with theoretical predictions for incomplete sequences of levels of systems with broken time reversal symmetry.
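The broken-time-reversal-symmetry spacing statistics referred to above can be checked numerically with the standard Wigner surmise for the Gaussian unitary ensemble (GUE); this is a textbook illustration, not the authors' microwave-network procedure, and the function name is hypothetical:

```python
import numpy as np

def gue_spacings(n_matrices=2000, seed=0):
    """Nearest-neighbour spacings of 2x2 GUE matrices. Their distribution
    follows the Wigner surmise for broken time-reversal symmetry,
    P(s) = (32 / pi^2) s^2 exp(-4 s^2 / pi), with unit mean spacing."""
    rng = np.random.default_rng(seed)
    s = np.empty(n_matrices)
    for k in range(n_matrices):
        a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        h = (a + a.conj().T) / 2.0          # Hermitian (GUE) matrix
        e = np.linalg.eigvalsh(h)           # eigenvalues in ascending order
        s[k] = e[1] - e[0]
    return s / s.mean()                     # normalise to unit mean spacing
```

An incomplete spectrum as in the experiment would correspond to randomly discarding a fraction of the levels before computing spacings; the quadratic small-s suppression (level repulsion) is the GUE fingerprint.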
We apply the Zipf power law to financial time series of WIG20 index daily changes (open-close values). By mapping the time series signal into a sequence of 2k+1 'spin-like' states, where k=0, 1/2, 1, 3/2, ..., we are able to describe any time series increment, with almost arbitrary accuracy, as one of such 'spin-like' states. In the simplest non-trivial case (k = 1/2) this procedure leads to a binary data projection. More sophisticated projections are also possible and are mentioned in the article. The introduced formalism then allows the Zipf power law to be used to describe the intrinsic structure of the time series. A fast algorithm for this procedure was implemented by us in Matlab^{TM}. The method, called the Zipf strategy, is then applied in the simplest case k = 1/2 to WIG 20 open and close daily data to make short-term predictions of forthcoming index changes. The forecast effectiveness is presented with respect to different time window sizes and partition divisions (word lengths in the Zipf language). Finally, various investment strategies improving the return of investment (ROI) for WIG20 futures are proposed. We show that the Zipf strategy is an appropriate and very effective tool for making short-term predictions and, therefore, for evaluating short-term investments on the basis of historical stock index data. Our findings also support the existence of long memory in financial data, exceeding the 3-day span limit known in the literature.
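The k = 1/2 binary projection and a naive word-based prediction in the spirit of the Zipf strategy can be sketched as below. The paper's actual Matlab implementation is not reproduced; the function names and the tie-breaking choice are assumptions:

```python
from collections import Counter

def binary_projection(increments):
    """k = 1/2 'spin' projection: map each increment to '+' or '-'."""
    return ''.join('+' if x >= 0 else '-' for x in increments)

def word_frequencies(symbols, length):
    """Frequencies of overlapping words of a given length (Zipf rank analysis)."""
    words = (symbols[i:i + length] for i in range(len(symbols) - length + 1))
    return Counter(words).most_common()

def predict_next(symbols, length):
    """Naive Zipf strategy: given the last (length - 1) symbols, predict the
    continuation that occurred most often in the history."""
    context = symbols[-(length - 1):]
    counts = Counter(symbols[i + length - 1]
                     for i in range(len(symbols) - length + 1)
                     if symbols[i:i + length - 1] == context)
    return counts.most_common(1)[0][0] if counts else '+'
```

Sorting the word frequencies by rank and fitting a power law to the rank-frequency plot is the Zipf analysis proper; the prediction step simply bets on the most frequent continuation of the current context.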
The standard analysis of correlations between companies consists of two stages: calculating the distance matrix and constructing a chosen graph structure. In the paper the most often used Ultrametric Distance (UD) is compared with the Manhattan Distance (MD). It is shown that MD allows a broader class of correlations to be investigated and is more robust to noise. Therefore MD was used to construct an entropy distance, which is applied to the analysis of correlations between subsets of WIG20 and S&P 500 companies. Three network structures were used in the analysis: the minimum spanning tree and the unidirectional and bidirectional minimal length paths. The results are compared to the standard UD-based analysis. The advantages and disadvantages of the analysed time series distances are outlined.
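The two distances being compared can be sketched as follows; the normalisation applied before the Manhattan distance is our assumption, and the function names are hypothetical:

```python
import numpy as np

def ultrametric_distance(x, y):
    """Standard Pearson-based ultrametric distance, d = sqrt(2 (1 - rho)) in [0, 2]."""
    rho = np.corrcoef(x, y)[0, 1]
    # clip to guard against floating-point rho marginally outside [-1, 1]
    return np.sqrt(max(2.0 * (1.0 - rho), 0.0))

def manhattan_distance(x, y):
    """Mean Manhattan distance between standardised return series."""
    xn = (x - x.mean()) / x.std()
    yn = (y - y.mean()) / y.std()
    return np.abs(xn - yn).mean()
```

Unlike UD, which depends only on the Pearson coefficient, MD is sensitive to the pointwise shape of the two standardised series, which is one way to read the "broader class of correlations" claim.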

Evacuation in the Social Force Model is not Stationary

An evacuation process is simulated within the Social Force Model. One thousand pedestrians leave a room through a single exit. We investigate the stationarity of the distribution of time lags between the instants when two successive pedestrians cross the exit. The exponential tail of the distribution is shown to gradually vanish. Fluctuations apart, the time lags decrease in time until only about 50 pedestrians remain in the room, then they start to increase. This suggests that at the last stage the flow is laminar. In the first stage, clogging events slow the evacuation down. As they are more likely for larger crowds, the flow is not stationary. The data are investigated with detrended fluctuation analysis and return interval statistics, and no pattern transition is found between the stages of the process.

Effect of Detrending on Multifractal Characteristics

Different variants of the multifractal detrended fluctuation analysis technique are applied to various (artificial and real-world) time series. Our analysis shows that the calculated singularity spectra are very sensitive to the order of the detrending polynomial used within the multifractal detrended fluctuation analysis method. The relation between the width of the multifractal spectrum (as well as the Hurst exponent) and the order of the polynomial used in the calculation is evident. Furthermore, the type of this relation itself depends on the kind of signal analyzed. Therefore, such an analysis can give us some extra information about the correlative structure of the time series being studied.
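The role of the detrending polynomial order can be probed with an ordinary (monofractal) DFA sketch like the one below; this is a simplified illustration, not the authors' MFDFA code. Uncorrelated noise should give a Hurst exponent H close to 0.5 for any reasonable order:

```python
import numpy as np

def dfa(signal, scales, order=2):
    """DFA fluctuation function F(s). The slope of log F vs log s estimates
    the Hurst exponent H; `order` is the detrending polynomial order."""
    profile = np.cumsum(signal - np.mean(signal))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)           # local polynomial trend
            rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

def hurst(signal, scales, order=2):
    """Hurst exponent from a log-log fit of the fluctuation function."""
    F = dfa(signal, scales, order)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Re-running `hurst` with different `order` values on the same series exposes the sensitivity the abstract describes; in MFDFA the same detrending step sits inside the q-th-order fluctuation function.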
A large amount of intraday stock price data allows us to summarize the proportions of subsequent movements of the collected share prices in the form of histograms. We have created two kinds of histograms: one for the proportions of subsequent increasing and decreasing price movements, and a second for the proportions of subsequent price movements in the same direction. We have also created the same kinds of histograms for the duration of price movements. All the histograms fit the gamma probability distribution quite well. The values of the distribution coefficients ν and λ are above 1 for price and below 1 for time. Some proportions of price movements occur more frequently than others, creating peaks on the graph. A similar regularity occurs for the time factor. This property is often used in trading.
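A method-of-moments fit is one simple way to obtain gamma parameters such as the ν (shape) and λ (rate) quoted above; the authors' actual fitting procedure is not specified, so this is an assumption:

```python
import numpy as np

def fit_gamma_moments(data):
    """Method-of-moments fit of the gamma density
    f(x) = lam^nu * x^(nu - 1) * exp(-lam * x) / Gamma(nu):
    nu = mean^2 / var, lam = mean / var."""
    m, v = np.mean(data), np.var(data)
    return m * m / v, m / v
```

With this parametrisation, ν > 1 produces a peaked (unimodal with interior mode) histogram, matching the peaks described for the price proportions, while ν < 1 gives a monotonically decreasing density, as reported for the durations.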
We make a comparative study of the scaling range properties of detrended fluctuation analysis (DFA), detrended moving average analysis (DMA) and the recently proposed technique called modified detrended moving average analysis (MDMA). Basic properties of the scaling ranges of these techniques are reviewed. The efficiency and exactness of all three methods in properly determining the scaling Hurst exponent H are discussed, particularly for short series of uncorrelated and persistent data.
This article presents a new approach to analyzing the relationships between financial instruments. We use blind signal separation methods to decompose time series into core components. The components common to the various instruments provide a broad set of characteristics describing the internal morphology of the time series. In this research a modified and extended version of the AMUSE algorithm is used. The concept is demonstrated on real financial instruments.
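The core of the basic AMUSE algorithm (whitening followed by eigendecomposition of a symmetrised time-lagged covariance) can be sketched as below; the paper's modifications and extensions are not reproduced here:

```python
import numpy as np

def amuse(X, tau=1):
    """Basic AMUSE blind source separation for observations X (channels x samples):
    whiten the data, then diagonalise the symmetrised lag-tau covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    # whitening transform from the zero-lag covariance
    d, E = np.linalg.eigh(np.cov(X))
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W @ X
    # symmetrised time-lagged covariance of the whitened data
    C = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
    C = (C + C.T) / 2.0
    _, V = np.linalg.eigh(C)
    return V.T @ Z          # estimated source components
```

AMUSE succeeds when the sources have distinct lag-tau autocorrelations, which is why it suits time series with different internal dynamics, such as the financial instruments considered here.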

Agent-Based Modelling of a Commodity Market Dynamics

A modification of Yasutomi's agent-based model of the commodity market is investigated. It is argued that the introduced modification of the microscopic exchange rules allows for the emergence of commodity exchange rates in the model. Moreover, the scaling of the model due to finite size effects is considered and some practical implications of this scaling are discussed.

Modelling Emergence of Money

The agent-based computational economics (ACE) model with one free parameter (Thresh) proposed by Yasutomi is analyzed in detail. We have found that for a narrow range of the parameter, in the money emergence phase, the money lifetime is finite and the "money switching" effect can be observed over a long enough time evolution. Long periods of stability are followed by shorter periods with much shorter money lifetimes. The money switching points have been found to have a non-Cantor distribution on the time axis, i.e. the Rényi exponents determined by the box-counting algorithm equal 1.0 with high accuracy.
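The box-counting estimate behind the Rényi-exponent result can be sketched as follows. For a point set that fills the time axis uniformly the estimated dimension is 1, consistent with the non-Cantor finding; a Cantor-like set would give a value below 1:

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Box-counting (D0) estimate for points in [0, 1): the slope of
    log N(eps) vs log(1/eps), where N(eps) counts occupied boxes of size eps."""
    points = np.asarray(points)
    counts = [len(np.unique(np.floor(points / e))) for e in epsilons]
    return np.polyfit(np.log(1.0 / np.array(epsilons)), np.log(counts), 1)[0]
```

Here the "points" would be the switching instants rescaled onto the unit interval; higher-order Rényi exponents would weight the box occupation counts rather than merely counting occupied boxes.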
In this work we empirically analyze the customer churn problem from a physical point of view, to provide objective, data-driven and significant answers supporting the decision-making process in business applications. In particular, we explore different entropy measures applied to decision trees and assess their performance from the business perspective using a set of model quality measures often used in business practice. Additionally, the decision trees are compared with logistic regression and two machine learning methods - neural networks and support vector machines.
In this model study of the commodity market, we present some evidence of competition between commodities for the status of money in the regime of parameters where the emergence of money is possible. The competition reveals itself as a rivalry of a few (typically two) dominant commodities, which take the status of money in turn.

Q-Entropy Approach to Selecting High Income Households

A generalized algorithm for building classification trees, based on the Tsallis q-entropy, is proposed and applied to the classification of Polish households with respect to their incomes. Data for 2008 are used. Quality measures for the obtained trees are compared for different values of the q parameter. A method of choosing the optimum tree is elaborated.
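The Tsallis q-entropy split criterion can be sketched as below; for q → 1 it recovers the Shannon entropy used in standard trees. The tree-building machinery itself is omitted, and the helper names are hypothetical:

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis q-entropy S_q = (1 - sum p_i^q) / (q - 1);
    in the limit q -> 1 this reduces to -sum p_i ln p_i (Shannon)."""
    if abs(q - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def q_gain(parent_probs, children, q):
    """q-entropy decrease of a candidate split; `children` is a list of
    (weight, class_probabilities) pairs with weights summing to 1."""
    return tsallis_entropy(parent_probs, q) - sum(
        w * tsallis_entropy(p, q) for w, p in children)
```

Varying q reweights rare versus common classes in the impurity measure, which is the knob the tree-quality comparison in the abstract turns.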
We use methods of non-extensive statistical physics to describe quantitatively the memory effect involved in the returns of companies from the WIG 30 index on the Warsaw Stock Exchange. The entropic approach, based on the generalization of the Boltzmann-Gibbs entropy to the non-additive Tsallis q-entropy, is applied to fit the fat-tailed distribution of returns to the q-normal (Tsallis) distribution. The existence of long-term memory effects in price returns generated by two-point autocorrelations is checked via calculation of the Hurst exponent within the detrended fluctuation analysis approach. The results are collected for diverse frequencies of data sampling. We confirm the perfect inverse cubic power law at low time-lags (≈1 min) of returns for the main WIG 30 index as well as for most of the separate stocks; however, this relationship does not hold for longer time-lags. Particular emphasis is given to an independent fit of the probability distributions of positive and negative returns to the q-normal distribution. We discuss in this context the asymmetry between the tails in terms of the Tsallis parameters q^{±}. A qualitative and quantitative relationship between the frequency of data sampling, the parameters q and q^{±}, and the corresponding main Hurst exponent H is provided to analyze the effect of memory in the data caused by linear and nonlinear autocorrelations. A new quantifier based on the asymmetry of the Tsallis index, instead of the skewness of the distribution, is proposed, which we believe is able to describe the stage of market development and its robustness to speculation.
In this paper we present a novel similarity measure method for financial data. In our approach, we propose assessing the similarity in a coherent, hierarchical and multi-faceted way, following a general scheme in which various detailed basic measures may be used, such as the Fermi-Dirac divergence, the Bose-Einstein divergence, or our new smoothness measure. The presented method is tested on benchmark and real stock market data.
We propose two novel methodological approaches - the detrending moving average based regression coefficient estimator and the scale-dependent instrumental variable estimator - and show their utility on a specific case of dependence between stock markets and connected foreign exchange rates in the Central European region - the Czech Republic, Hungary, and Poland. The methodology has proven useful as we uncovered several interesting findings such as scale dependence of the shock transmission and differences between the Euro and U.S. dollar currency pairs. The Polish currency is also the most sensitive of the three with respect to the stock market shocks. The proposed methodology can be applied to any system with potential endogeneity issues if one is interested in the scale variability of the effect of interest.
The article presents independent component analysis (ICA) applied to the concept of ensemble predictors. The use of ICA decomposition enables the extraction of components with particular statistical properties that can be interpreted as destructive or constructive for the prediction. Such a process can be treated as noise filtration of multivariate observation data, in which the observed data consist of prediction results. As a consequence of the multivariate ICA approach, the final results are a combination of the primary models, which can be interpreted as an aggregation step. The key issue of the presented method is the identification of the noise components. For this purpose, a new method for evaluating the randomness of the signals was developed. The experimental results show that the presented approach is effective for ensemble prediction with respect to different prediction criteria, even for a small set of models.