Preprint
Concept Paper

Finding a Published Paper Which Meaningfully Averages the Most Pathological Functions


This version is not peer-reviewed

Submitted: 03 September 2024
Posted: 05 September 2024

Abstract
I want to meaningfully average a pathological function (i.e., an everywhere surjective function whose graph has zero Hausdorff measure in its dimension). We do this by taking the most generalized, satisfying extension of the expected value, w.r.t. the Hausdorff measure in its dimension, on bounded functions which takes finite values only. As of now, I'm unable to solve this due to my limited knowledge of advanced math, and most people are too busy to help. Therefore, I'm wondering if anyone knows a research paper which resolves my questions.
Keywords: 
Subject: Computer Science and Mathematics - Mathematics

1. Introduction

Let $n \in \mathbb{N}$ and suppose $f : A \subseteq \mathbb{R}^n \to \mathbb{R}$, where $A$ and $f$ are Borel. Let $\dim_{\text{H}}(\cdot)$ be the Hausdorff dimension, where $\mathcal{H}^{\dim_{\text{H}}(\cdot)}(\cdot)$ is the Hausdorff measure in its dimension on the Borel $\sigma$-algebra.

1.1. Special Case of f

If the graph of $f$ is $G$, is there an explicit $f$ where:
(1) the function $f$ is everywhere surjective [1], and
(2) $\mathcal{H}^{\dim_{\text{H}}(G)}(G) = 0$?
We denote this special case of $f$ as $F$, explaining, in the next section, why $F$ is pathological.

1.2. Attempting to Analyze/Average F

Suppose the expected value of $f$ is:
$$\mathbb{E}[f] = \frac{1}{\mathcal{H}^{\dim_{\text{H}}(A)}(A)} \int_A f \, d\mathcal{H}^{\dim_{\text{H}}(A)}$$
Note, an explicit $F$ is pathological, since it's everywhere surjective and difficult to meaningfully average (i.e., the most generalized, satisfying (§2) extension of $\mathbb{E}[F]$ is non-finite).
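As a concrete illustration of the definition above (and separate from the pathological case $F$), the following minimal Python sketch approximates $\mathbb{E}[f]$ in the simplest setting $A = [0,1] \subset \mathbb{R}$, where $\dim_{\text{H}}(A) = 1$ and $\mathcal{H}^1$ restricted to $A$ agrees with the Lebesgue measure; the test function and grid size are arbitrary choices for the example.

```python
import numpy as np

def expected_value_1d(f, a=0.0, b=1.0, n=100_000):
    """Approximate E[f] = (1/H^1(A)) * integral of f over A = [a, b],
    where H^1 agrees with the Lebesgue measure, via the midpoint rule."""
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)  # midpoints
    integral = np.sum(f(x)) * (b - a) / n   # midpoint-rule approximation of the integral
    return integral / (b - a)               # divide by H^1(A) = b - a

# Example: a bounded Borel function on [0, 1]
print(expected_value_1d(lambda x: x**2))    # approximately 1/3
```

For $F$ itself no such direct computation is available, which is exactly the problem addressed below.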
Thus, we want the most generalized, satisfying extension of $\mathbb{E}[f]$ on bounded $f$, where the extension takes finite values for all $F$. Moreover, suppose:
(1) the sequence of bounded functions is $\mathbf{f}_s = (f_{r_s}^{(s)})_{r_s \in \mathbb{N}}$;
(2) the sequence of bounded functions converges to $f$: i.e., $f_{r_s}^{(s)} \to f$;
(3) the generalized, satisfying extension of $\mathbb{E}[f]$ is $\mathbb{E}[f_{r_s}^{(s)}]$: i.e., there exists an $s \in \mathbb{N}$ where $\mathbb{E}[f_{r_s}^{(s)}]$ is finite;
(4) there exist $k, v \in \mathbb{N}$ where the expected values of $\mathbf{f}_k$ and $\mathbf{f}_v$ are finite and non-equivalent: i.e.,
$$-\infty < \mathbb{E}[f_{r_k}^{(k)}] \neq \mathbb{E}[f_{r_v}^{(v)}] < +\infty$$
(Whenever (4) is true, (3) is non-unique.)
Therefore, since $\mathbb{E}[f]$ is finite for all $f$ in a shy [2] subset of $\mathbb{R}^A$ and (4) is true for all $f$ in a prevalent [2] subset of $\mathbb{R}^A$, we state the following:

1.2.1. Blockquote

We want to find a unique, satisfying (§3) extension of $\mathbb{E}[f]$, on bounded functions converging to $f$, which takes finite values only, such that the set of all $f$ with this extension forms:
(1) a prevalent [2] subset of $\mathbb{R}^A$, or
(2) if not prevalent, then a non-shy (i.e., neither prevalent [2] nor shy [2]) subset of $\mathbb{R}^A$.
For the sake of clarity and precision, we describe examples of "extending $\mathbb{E}[f]$ on all $A$ with positive and finite Hausdorff measure" (§2) and use the examples to define the terms "unique and satisfying" (§3) in the blockquote of this section.

2. Extending the Expected Value w.r.t. the Hausdorff Measure

The following are two methods for determining the most generalized, satisfying extension of $\mathbb{E}[f]$ on all $A$ with positive and finite Hausdorff measure:
(1) One way is defining a generalized, satisfying extension of the Hausdorff measure, on all $A$ with positive and finite measure, which takes positive, finite values for all Borel $A$. This can theoretically be done with the paper "A Multi-Fractal Formalism for New General Fractal Measures" [3], by taking the expected value of $f$ w.r.t. the extended Hausdorff measure.
(2) Another way is finding a generalized, satisfying average of all $A$ in the fractal setting. This can be done with the papers "Analogues of the Lebesgue Density Theorem for Fractal Sets of Reals and Integers" [4] and "Ratio Geometry, Rigidity and the Scenery Process for Hyperbolic Cantor Sets" [5], where we take the expected value of $f$ w.r.t. the densities in [4,5].
(Note, the methods in these papers could be used in §3.2 to answer the blockquote of §1.2.1.)

3. Attempt to Define "Unique and Satisfying" in the Blockquote of §1.2

3.1. Note

Before reading, when §3.2 is unclear, see §5 for clarity. In §5, we define:
(1) "sequences of bounded functions converging to $f$" (§5.1);
(2) "equivalent sequences of bounded functions" (§5.2, def. 1);
(3) "non-equivalent sequences of bounded functions" (§5.2, def. 2);
(4) the "measure" of a property of a sequence of bounded functions which increases at a rate linear or superlinear to that of "non-equivalent" sequences of bounded functions (§5.3.1, §5.3.2);
(5) the "actual" rate of expansion of a sequence of bounded sets (§5.4).

3.2. Leading Question

To define "unique and satisfying" in the blockquote of §1.2, we take the expected value of a sequence of bounded functions chosen by a choice function. To find the choice function, we ask the leading question below.
If we make sure to:
(A) see §3.1 and (C)-(E) when something is unclear;
(B) take all sequences of bounded functions which converge to $f$;
(C) define $C$ to be the chosen center point of $\mathbb{R}^{n+1}$;
(D) define $\mathcal{E}$ to be the chosen, fixed rate of expansion of a sequence on the graph of bounded functions;
(E) define $\mathcal{E}(C, G_r)$ to be the actual rate of expansion of a sequence on the graph of bounded functions (§5.4);
does there exist a unique choice function which chooses a unique set of equivalent sequences of bounded functions where:
(1) the chosen, equivalent sequences of bounded functions satisfy (B);
(2) the "measure" of the graphs of all chosen, equivalent sequences of bounded functions which satisfy (B) increases at a rate linear or superlinear to that of non-equivalent sequences of bounded functions satisfying (B);
(3) the expected values, defined in the papers of §2, of all equivalent sequences of bounded functions are equivalent and finite;
(4) for the chosen, equivalent sequences of bounded functions satisfying (1), (2), and (3):
  • the absolute difference between the value in criterion (3) and the $(n+1)$-th coordinate of $C$ is less than or equal to that of non-equivalent sequences of bounded functions satisfying (1), (2), and (3);
  • the "rate of divergence" [6, p. 275-322] of $\|\mathcal{E} - \mathcal{E}(C, G_r)\|$, using the absolute value $\|\cdot\|$, is less than or equal to that of non-equivalent sequences of bounded functions which satisfy (1), (2), and (3);
(5) when the set $Q \subseteq \mathbb{R}^A$ is the set of all $f \in \mathbb{R}^A$ for which the choice function chooses all equivalent sequences of bounded functions satisfying (1), (2), (3), and (4), then $Q$ is
  (a) a prevalent [2] subset of $\mathbb{R}^A$, or
  (b) if not (a), then a non-shy (i.e., neither prevalent [2] nor shy [2]) subset of $\mathbb{R}^A$;
(6) out of all choice functions which satisfy (1), (2), (3), (4), and (5), we choose the one with the simplest form, meaning that when each choice function is fully expanded, we take the one with the fewest variables/numbers?
(In case this is unclear, see §5.)
I'm convinced the expected values of the sequences of bounded functions chosen by a choice function which answers the leading question are neither unique nor satisfying enough to answer the blockquote of §1.2.1. Still, adjustments are possible by changing the criteria or by adding new criteria to the question.

4. Question Regarding My Work

Most people don't have time to address everything in my research, hence I ask the following:
Is there a research paper which already solves the ideas I'm working on? (Unpublished papers, such as mine [7], don't count.)
Using AI, I found two papers that might answer this question: "Prediction of dynamical systems from time-delayed measurements with self-intersections" [8] and "A Hausdorff measure boundary element method for acoustic scattering by fractal screens" [9].
Does either of these papers solve the blockquote of §1.2.1?

5. Clarifying §3

Suppose $(f_r)_{r\in\mathbb{N}}$ is a sequence of bounded functions converging to $f$, and $(G_r)_{r\in\mathbb{N}}$ is the sequence of graphs of each $f_r$. Let $\dim_{\text{H}}(\cdot)$ be the Hausdorff dimension and $\mathcal{H}^{\dim_{\text{H}}(\cdot)}(\cdot)$ be the Hausdorff measure in its dimension on the Borel $\sigma$-algebra.
Return to §3.2 after reading §5, and consider the following:
Is there a simpler version of the definitions below?

5.1. Defining Sequences of Bounded Functions Converging to f

The sequence of bounded functions $(f_r)_{r\in\mathbb{N}}$, where $f_r : A_r \to \mathbb{R}$ and $(A_r)_{r\in\mathbb{N}}$ is a sequence of bounded sets, converges to the function $f : A \to \mathbb{R}$ when:
for any $x = (x_1, \cdots, x_n) \in A$ there exists a sequence $x_r \in A_r$ such that $x_r \to (x_1, \cdots, x_n)$ and $f_r(x_r) \to f(x_1, \cdots, x_n)$ (see [10] for more information).
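For instance (using the bounded step function $f$ from the appendix, §8.1), take $A_r = [-r, r]$ and $f_r = f|_{A_r}$. For any $x \in \mathbb{R}$, set $x_r = \max(-r, \min(r, x))$; then $x_r \in A_r$, $x_r \to x$, and $f_r(x_r) = f(x)$ for all $r \ge |x|$, so $(f_r)_{r\in\mathbb{N}}$ converges to $f$ in the sense above.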

5.2. Defining Equivalent and Non-Equivalent Sequences of Bounded Functions

Let $S \subseteq \mathbb{N}$ be an arbitrary set and define the following sequences of functions:
$$\mathbf{f}_1 = \{f_{r_1}^{(1)}\}_{r_1\in\mathbb{N}},\quad \mathbf{f}_2 = \{f_{r_2}^{(2)}\}_{r_2\in\mathbb{N}},\quad \cdots,\quad \mathbf{f}_s = \{f_{r_s}^{(s)}\}_{r_s\in\mathbb{N}}$$
Note, the sequences of bounded functions in $\{\mathbf{f}_s : s \in \mathbb{N}\}$ converge to $f$, and the sequences of the graphs of all functions in each former sequence are:
$$\mathbf{G}_1 = \{\operatorname{graph}(f_{r_1}^{(1)})\}_{r_1\in\mathbb{N}} = \{G_{r_1}^{(1)}\}_{r_1\in\mathbb{N}},\quad \mathbf{G}_2 = \{\operatorname{graph}(f_{r_2}^{(2)})\}_{r_2\in\mathbb{N}} = \{G_{r_2}^{(2)}\}_{r_2\in\mathbb{N}},\quad \cdots,\quad \mathbf{G}_s = \{\operatorname{graph}(f_{r_s}^{(s)})\}_{r_s\in\mathbb{N}} = \{G_{r_s}^{(s)}\}_{r_s\in\mathbb{N}}$$
Definition 1 (Equivalent sequences of functions). Suppose $S \subseteq \mathbb{N}$ is an arbitrary set. The sequences of bounded functions in
$$\{\mathbf{f}_s : s \in S\}$$
are equivalent if, for all $k, v \in S$ with $k \neq v$, $\mathbf{f}_k$ and $\mathbf{f}_v$ are equivalent: i.e., there exists an $N \in \mathbb{N}$ such that for all $r_k \ge N$ there is an $r_v \in \mathbb{N}$ where
$$\mathcal{H}^{\dim_{\text{H}}(G_{r_k}^{(k)})}\big(G_{r_k}^{(k)} \,\Delta\, G_{r_v}^{(v)}\big) = 0,$$
and for all $r_v \ge N$ there is an $r_k \in \mathbb{N}$ where
$$\mathcal{H}^{\dim_{\text{H}}(G_{r_v}^{(v)})}\big(G_{r_k}^{(k)} \,\Delta\, G_{r_v}^{(v)}\big) = 0.$$
Moreover, for each $s \in \mathbb{N}$, we denote the set of all sequences of bounded functions equivalent to $\{f_{r_s}^{(s)}\}_{r_s\in\mathbb{N}}$ using the notation $\big[\{f_{r_s}^{(s)}\}_{r_s\in\mathbb{N}}\big]$.
Theorem 1. If the sequences of functions in
$$\{\mathbf{f}_s : s \in S\}$$
are equivalent, then for all $k, v \in S$ with $k \neq v$:
$$\mathbb{E}[f_{r_k}^{(k)}] = \mathbb{E}[f_{r_v}^{(v)}]$$
Note, this explains criterion (3) in §3.2.
Definition 2 (Non-equivalent sequences of functions). Again, $S \subseteq \mathbb{N}$ is an arbitrary set. The sequences of bounded functions in
$$\{\mathbf{f}_s : s \in S\}$$
are non-equivalent if def. 1 is false, meaning for some $k, v \in S$ with $k \neq v$, $\mathbf{f}_k$ and $\mathbf{f}_v$ are non-equivalent: i.e., for every $N \in \mathbb{N}$, either there is an $r_k \ge N$ where for all $r_v \in \mathbb{N}$
$$\mathcal{H}^{\dim_{\text{H}}(G_{r_k}^{(k)})}\big(G_{r_k}^{(k)} \,\Delta\, G_{r_v}^{(v)}\big) \neq 0,$$
or there is an $r_v \ge N$ where for all $r_k \in \mathbb{N}$
$$\mathcal{H}^{\dim_{\text{H}}(G_{r_v}^{(v)})}\big(G_{r_k}^{(k)} \,\Delta\, G_{r_v}^{(v)}\big) \neq 0.$$
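As a quick illustration of both definitions, take the bounded step function $f$ from §8.1 and its truncations. The sequences $\mathbf{f}_1 = (f|_{[-r,r]})_{r\in\mathbb{N}}$ and $\mathbf{f}_2 = (f|_{[-(r+1),\,r+1]})_{r\in\mathbb{N}}$ are equivalent in the sense of def. 1 (take $N = 2$; from that index on, each graph in one sequence appears exactly in the other). By contrast, $\mathbf{f}_1$ and $\mathbf{f}_3 = (f|_{[-2r,\,2r]})_{r\in\mathbb{N}}$ are non-equivalent: for odd $r_1$ the graph of $f|_{[-r_1, r_1]}$ differs from every graph in $\mathbf{f}_3$ over an interval of positive length, so the symmetric difference always has positive $\mathcal{H}^1$ measure.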

5.3. Defining the “Measure"

5.3.1. Preliminaries

We define the "measure" of $(f_r)_{r\in\mathbb{N}}$ in §5.3.2, where $(G_r)_{r\in\mathbb{N}}$ is the sequence of graphs of each $f_r$. To understand this "measure", continue reading. (A computational sketch of the steps below is given after the list.)
(1) For every $r \in \mathbb{N}$, "over-cover" $G_r$ with minimal, pairwise disjoint sets of equal $\mathcal{H}^{\dim_{\text{H}}(G_r)}$ measure. (We denote the equal measures $\varepsilon$, where the cover described in the former sentence is defined $\mathcal{C}(\varepsilon, G_r, \omega)$: i.e., $\omega \in \Omega_{\varepsilon,r}$ enumerates all collections of these sets covering $G_r$. In case this step is unclear, see §8.1.)
(2) For every $\varepsilon$, $r$, and $\omega$, take a sample point from each set in $\mathcal{C}(\varepsilon, G_r, \omega)$. The set of these points is "the sample", which we define $\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)$: i.e., $\psi \in \Psi_{\varepsilon,r,\omega}$ enumerates all possible samples of $\mathcal{C}(\varepsilon, G_r, \omega)$. (If this is unclear, see §8.2.)
(3) For every $\varepsilon$, $r$, $\omega$, and $\psi$,
  (a) take a "pathway" of line segments: we start with a line segment from an arbitrary point $x_0$ of $\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)$ to the sample point with the smallest $(n+1)$-dimensional Euclidean distance to $x_0$ (when more than one sample point has the smallest distance to $x_0$, take either of those points). Next, repeat this process until the "pathway" intersects every sample point once. (In case this is unclear, see §8.3.1.)
  (b) Take the set of the lengths of all segments in (a), except for lengths that are outliers [11]. Define this $\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi))$. (If this is unclear, see §8.3.2.)
  (c) Multiply the remaining lengths in the pathway by a constant so they add up to one (i.e., a probability distribution). This will be denoted $\mathbb{P}(\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))$. (In case this is unclear, see §8.3.3.)
  (d) Take the Shannon entropy [12, p. 61-95] of step (c). We define this:
$$\mathrm{E}(\mathbb{P}(\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))) = -\sum_{x \in \mathbb{P}(\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))} x \log_2 x,$$
which will be shortened to $\mathrm{E}(\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))$. (If this is unclear, see §8.3.4.)
  (e) Maximize the entropy w.r.t. all "pathways". This we will denote:
$$\mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi))) = \sup_{x_0 \in \mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)} \mathrm{E}(\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))$$
(In case this is unclear, see §8.3.5.)
(4) Therefore, the maximum entropy, using (1) and (2), is:
$$\mathrm{E}_{\max}(\varepsilon, r) = \sup_{\omega \in \Omega_{\varepsilon,r}} \sup_{\psi \in \Psi_{\varepsilon,r,\omega}} \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))$$
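The sketch below (in Python) makes steps (2)-(4) concrete for one fixed cover, one sample, and a finite set of sample points. The outlier rule (the common 1.5*IQR convention, via NumPy's default percentile interpolation) and the tie-breaking (take the first nearest point) are illustrative assumptions on my part, since the text leaves those choices open; the nine sample points are the ones worked out in §8.2.

```python
import numpy as np

def pathway_lengths(sample, start):
    """Step (3a): nearest-neighbour 'pathway' starting at `start`;
    returns the lengths of the segments, in order."""
    remaining = [p for p in sample if p != start]
    current, lengths = np.asarray(start, float), []
    while remaining:
        dists = [np.linalg.norm(current - np.asarray(p, float)) for p in remaining]
        i = int(np.argmin(dists))          # ties: take the first minimiser (cf. note 3)
        lengths.append(dists[i])
        current = np.asarray(remaining.pop(i), float)
    return np.array(lengths)

def pathway_entropy(lengths):
    """Steps (3b)-(3d): drop outliers (1.5*IQR rule, one common convention),
    normalise to a probability distribution, take the Shannon entropy."""
    q1, q3 = np.percentile(lengths, [25, 75])
    iqr = q3 - q1
    kept = lengths[(lengths >= q1 - 1.5 * iqr) & (lengths <= q3 + 1.5 * iqr)]
    p = kept / kept.sum()                  # step (3c)
    return float(-(p * np.log2(p)).sum())  # step (3d)

def max_entropy(sample):
    """Step (3e): maximise the pathway entropy over all starting points x_0."""
    return max(pathway_entropy(pathway_lengths(sample, x0)) for x0 in sample)

# The sample from §8.2 (approximate coordinates); cf. the value found in §8.3.5.
sample = [(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.1, -1),
          (.3, -1), (.55, .5), (.75, .5), (1, .5)]
print(max_entropy(sample))                 # approximately 2.576 under these conventions
```

Enumerating all covers $\omega$ and all samples $\psi$ (step (4)) is not attempted here, since those index sets are generally infinite.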

5.3.2. What Am I Measuring?

Suppose we define two sequences of the graphs of bounded functions converging to the graph of $f$: e.g., $(G_r)_{r\in\mathbb{N}}$ and $(G_j)_{j\in\mathbb{N}}$, where for constant $\varepsilon$ and cardinality $|\cdot|$:
(a) Using (2) and (3e) of §5.3.1, suppose:
$$\underline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_r, \omega), \psi) = \sup\big\{|\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon,j},\ \psi \in \Psi_{\varepsilon,j,\omega},\ \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi))) \le \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))\big\};$$
then (using $\underline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)$) we get
$$\overline{\alpha}_{\varepsilon,r,\omega,\psi} = |\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)| \Big/ \sup_{\omega \in \Omega_{\varepsilon,r}} \sup_{\psi \in \Psi_{\varepsilon,r,\omega}} \underline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)$$
(b) Also, using (2) and (3e) of §5.3.1, suppose:
$$\overline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_r, \omega), \psi) = \inf\big\{|\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon,j},\ \psi \in \Psi_{\varepsilon,j,\omega},\ \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi))) \ge \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)))\big\};$$
then (using $\overline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)$) we also get:
$$\underline{\alpha}_{\varepsilon,r,\omega,\psi} = |\mathcal{S}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)| \Big/ \sup_{\omega \in \Omega_{\varepsilon,r}} \sup_{\psi \in \Psi_{\varepsilon,r,\omega}} \overline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_r, \omega), \psi)$$
(1) If, using $\overline{\alpha}_{\varepsilon,r,\omega,\psi}$ and $\underline{\alpha}_{\varepsilon,r,\omega,\psi}$, we have:
$$\limsup_{\varepsilon\to 0}\ \limsup_{r\to\infty}\ \sup_{\omega\in\Omega_{\varepsilon,r}}\ \sup_{\psi\in\Psi_{\varepsilon,r,\omega}} \overline{\alpha}_{\varepsilon,r,\omega,\psi} = \liminf_{\varepsilon\to 0}\ \liminf_{r\to\infty}\ \inf_{\omega\in\Omega_{\varepsilon,r}}\ \inf_{\psi\in\Psi_{\varepsilon,r,\omega}} \underline{\alpha}_{\varepsilon,r,\omega,\psi} = 0,$$
then what I'm measuring from $(G_r)_{r\in\mathbb{N}}$ increases at a rate superlinear to that of $(G_j)_{j\in\mathbb{N}}$.
(2) If, using the equations $\overline{\alpha}_{\varepsilon,j,\omega,\psi}$ and $\underline{\alpha}_{\varepsilon,j,\omega,\psi}$ (swapping $r \in \mathbb{N}$ and $(G_r)_{r\in\mathbb{N}}$, in $\overline{\alpha}_{\varepsilon,r,\omega,\psi}$ and $\underline{\alpha}_{\varepsilon,r,\omega,\psi}$, with $j \in \mathbb{N}$ and $(G_j)_{j\in\mathbb{N}}$), we get:
$$\limsup_{\varepsilon\to 0}\ \limsup_{j\to\infty}\ \sup_{\omega\in\Omega_{\varepsilon,j}}\ \sup_{\psi\in\Psi_{\varepsilon,j,\omega}} \overline{\alpha}_{\varepsilon,j,\omega,\psi} = \liminf_{\varepsilon\to 0}\ \liminf_{j\to\infty}\ \inf_{\omega\in\Omega_{\varepsilon,j}}\ \inf_{\psi\in\Psi_{\varepsilon,j,\omega}} \underline{\alpha}_{\varepsilon,j,\omega,\psi} = 0,$$
then what I'm measuring from $(G_r)_{r\in\mathbb{N}}$ increases at a rate sublinear to that of $(G_j)_{j\in\mathbb{N}}$.
(3) If, using the equations $\overline{\alpha}_{\varepsilon,r,\omega,\psi}$, $\underline{\alpha}_{\varepsilon,r,\omega,\psi}$, $\overline{\alpha}_{\varepsilon,j,\omega,\psi}$, and $\underline{\alpha}_{\varepsilon,j,\omega,\psi}$, we both have:
  (a) $\limsup_{\varepsilon\to 0} \limsup_{r\to\infty} \sup_{\omega\in\Omega_{\varepsilon,r}} \sup_{\psi\in\Psi_{\varepsilon,r,\omega}} \overline{\alpha}_{\varepsilon,r,\omega,\psi}$ or $\liminf_{\varepsilon\to 0} \liminf_{r\to\infty} \inf_{\omega\in\Omega_{\varepsilon,r}} \inf_{\psi\in\Psi_{\varepsilon,r,\omega}} \underline{\alpha}_{\varepsilon,r,\omega,\psi}$ does not equal zero, and
  (b) $\limsup_{\varepsilon\to 0} \limsup_{j\to\infty} \sup_{\omega\in\Omega_{\varepsilon,j}} \sup_{\psi\in\Psi_{\varepsilon,j,\omega}} \overline{\alpha}_{\varepsilon,j,\omega,\psi}$ or $\liminf_{\varepsilon\to 0} \liminf_{j\to\infty} \inf_{\omega\in\Omega_{\varepsilon,j}} \inf_{\psi\in\Psi_{\varepsilon,j,\omega}} \underline{\alpha}_{\varepsilon,j,\omega,\psi}$ does not equal zero,
then what I'm measuring from $(G_r)_{r\in\mathbb{N}}$ increases at a rate linear to that of $(G_j)_{j\in\mathbb{N}}$.

5.4. Defining the Actual Rate of Expansion of a Sequence of Bounded Sets

5.4.1. Definition of the Actual Rate of Expansion of a Sequence of Bounded Sets

Suppose $(f_r)_{r\in\mathbb{N}}$ is a sequence of bounded functions converging to $f$, where $(G_r)_{r\in\mathbb{N}}$ is the sequence of graphs of each $f_r$, and $d(Q, R)$ is the Euclidean distance between points $Q, R \in \mathbb{R}^{n+1}$. Therefore, using the "chosen" center point $C \in \mathbb{R}^{n+1}$, when:
$$\mathcal{G}(C, G_r) = \sup\{d(C, y) : y \in G_r\},$$
the actual rate of expansion is:
$$\mathcal{E}(C, G_r) = \mathcal{G}(C, G_{r+1}) - \mathcal{G}(C, G_r)$$
Note, there are cases of $(G_r)_{r\in\mathbb{N}}$ where $\mathcal{E}(C, G_r)$ isn't fixed and $\mathcal{E}(C, G_r) \neq \mathcal{E}$ (i.e., the chosen, fixed rate of expansion of §3.2 (D)).
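For instance, using the graph of $f(x) = x$ on $[-r, r]$ from Example 1 of §6.3.1 and choosing $C$ to be the origin,
$$\mathcal{G}(C, G_r) = \sup_{x \in [-r, r]} \sqrt{x^2 + x^2} = r\sqrt{2}, \qquad \mathcal{E}(C, G_r) = (r+1)\sqrt{2} - r\sqrt{2} = \sqrt{2},$$
so here the actual rate of expansion is fixed at $\sqrt{2}$; any chosen rate $\mathcal{E} \neq \sqrt{2}$ would then differ from it.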

5.5. Reminder

See if §3.2 is easier to understand.

6. My Attempt at Answering the Blockquote of §1.2.1

6.1. Choice Function

Suppose we define the following:
(1) $(f_k)_{k\in\mathbb{N}}$ is the sequence of bounded functions which satisfies (1), (2), (3), (4), and (5) of the leading question in §3.2.
(2) $S(f)$ is the set of all sequences of bounded functions satisfying (1) of the leading question for which the expected values, defined in the papers of §2, are finite.
(3) $(f_j)_{j\in\mathbb{N}}$ is an element of $S(f)$ but not an element of the set of sequences of bounded functions equivalent to $(f_k)_{k\in\mathbb{N}}$ (def. 1); using the notation at the end of def. 1, we represent this criterion as:
$$(f_j)_{j\in\mathbb{N}} \in S(f) \setminus \big[(f_k)_{k\in\mathbb{N}}\big]$$
Further, note from §5.3.2 (b) that if we take:
$$\overline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_k, \omega), \psi) = \inf\big\{|\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon,j},\ \psi \in \Psi_{\varepsilon,j,\omega},\ \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi))) \ge \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_k, \omega), \psi)))\big\}$$
and, from §5.3.2 (a), we take:
$$\underline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_k, \omega), \psi) = \sup\big\{|\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi)| : j \in \mathbb{N},\ \omega \in \Omega_{\varepsilon,j},\ \psi \in \Psi_{\varepsilon,j,\omega},\ \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_j, \omega), \psi))) \le \mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\varepsilon, G_k, \omega), \psi)))\big\},$$
then §5.3.1 (2), eq. 3, and eq. 4 give:
$$\sup_{\omega\in\Omega_{\varepsilon,k}} \sup_{\psi\in\Psi_{\varepsilon,k,\omega}} |\mathcal{S}(\mathcal{C}(\varepsilon, G_k, \omega), \psi)| = |\mathcal{S}(\varepsilon, G_k)| = |\mathcal{S}|$$
$$\sup_{\omega\in\Omega_{\varepsilon,k}} \sup_{\psi\in\Psi_{\varepsilon,k,\omega}} \overline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_k, \omega), \psi) = \overline{\mathcal{S}}(\varepsilon, G_k) = \overline{\mathcal{S}}$$
$$\sup_{\omega\in\Omega_{\varepsilon,k}} \sup_{\psi\in\Psi_{\varepsilon,k,\omega}} \underline{\mathcal{S}}(\mathcal{C}(\varepsilon, G_k, \omega), \psi) = \underline{\mathcal{S}}(\varepsilon, G_k) = \underline{\mathcal{S}}$$

6.2. Approach

We manipulate the definitions of §5.3.2 (a) and §5.3.2 (b) to address (1), (2), (3), (4), and (5) of the leading question in §3.2.

6.3. Potential Answer

6.3.1. Preliminaries (Infimum and Supremum of n-dimensional sets Using a Partial Order)

Define the supremum of an $n$-dimensional set using the partial order $(x_1, \cdots, x_n) > (y_1, \cdots, y_n)$ when $x_1 > y_1,\ x_2 > y_2,\ \cdots,\ x_n > y_n$, and define the infimum of an $n$-dimensional set using the partial order $(x_1, \cdots, x_n) < (y_1, \cdots, y_n)$ when $x_1 < y_1,\ x_2 < y_2,\ \cdots,\ x_n < y_n$.
Example 1. If $A = \mathbb{R}$, $f : \mathbb{R} \to \mathbb{R}$ where $f(x) = x$, and $G_r = \{(x, f(x)) : x \in [-r, r]\} \subseteq [-r, r] \times [-r, r]$, then $\sup(G_r) = (r, r)$ and $\inf(G_r) = (-r, -r)$.
Suppose the geometric mean of a point $X = (x_1, \cdots, x_n)$ is:
$$\mathrm{G}_{\mathrm{mean}}(X) = \sqrt[n]{x_1 x_2 \cdots x_n}$$
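For example, $\mathrm{G}_{\mathrm{mean}}((2, 8)) = \sqrt{2 \cdot 8} = 4$ and $\mathrm{G}_{\mathrm{mean}}((1, 3, 9)) = \sqrt[3]{27} = 3$.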
Thus, using the inf and sup of $n$-dimensional sets in §6.3.1 and the "chosen" center point $C \in \mathbb{R}^{n+1}$, when we define
$$T(C, G_k) = \frac{\mathrm{G}_{\mathrm{mean}}(\sup(G_{k+1}) - C) \cdot \mathrm{G}_{\mathrm{mean}}(\inf(G_k) - C)}{\mathrm{G}_{\mathrm{mean}}(\sup(G_k) - C) \cdot \mathrm{G}_{\mathrm{mean}}(\inf(G_{k+1}) - C)}$$
and use $|\mathcal{S}|$, $\overline{\mathcal{S}}$, $\underline{\mathcal{S}}$, $\mathcal{E}$, $\mathcal{E}(C, G_k)$ (§5.4), and $T(C, G_k)$, such that with the absolute value $\|\cdot\|$ and the nearest-integer function $\lfloor\cdot\rceil$, we define:
K ( ε , G k ) = 1 + E E ( C , G k ) ( | | S 1 + S S ̲ + 2 S S ̲ + S S ̲ + S + S ¯ 1 + S ̲ / S 1 + S / S ¯ 1 + S ̲ / S ¯ S | | + S ) T ( C , G k ) E ( C , G k )
"Removing" $\mathcal{E}$, $\mathcal{E}(C, G_k)$, and $T$ when $\mathcal{E}, \mathcal{E}(C, G_k) = 0$, the choice function which answers the leading question in §3.2 could be the following:
Theorem 2. 
If we define:
M ( ε , G k ) = | S ( ε , G k ) | K ( ε , G k ) | S ( ε , G k ) |
M ( ε , G j ) = | S ( ε , G j ) | K ( ε , G j ) | S ( ε , G j ) |
where $M(\varepsilon, G_j)$ and $M(\varepsilon, G_k)$ differ only by swapping "$j \in \mathbb{N}$" with "$k \in \mathbb{N}$" (for eq. 3 & 4) and the sets $G_k$ with $G_j$ (for eq. 39), then for constant $v > 0$ and variable $v^* > 0$, if:
S ¯ ( ε , k , v * , G j ) = inf | S ( ε , G j ) | : j N , M ( ε , G j ) M ( ε , G k ) v * { v * } + v
and:
S ̲ ( ε , k , v * , G j ) = sup | S ( ε , G j ) | : j N , v * M ( ε , G j ) M ( ε , G k ) { v * } + v
then for all $(f_j)_{j\in\mathbb{N}} \in S(f) \setminus \big[(f_k)_{k\in\mathbb{N}}\big]$ (def. 1), if:
$$\liminf_{\varepsilon\to 0}\ \lim_{v^*\to\infty}\ \lim_{k\to\infty} \frac{|\mathcal{S}(\varepsilon, G_k)| + v}{\overline{\mathcal{S}}(\varepsilon, k, v^*, G_j)} = \limsup_{\varepsilon\to 0}\ \lim_{v^*\to\infty}\ \lim_{k\to\infty} \frac{|\mathcal{S}(\varepsilon, G_k)| + v}{\underline{\mathcal{S}}(\varepsilon, k, v^*, G_j)} = 0$$
we choose $(G_k)_{k\in\mathbb{N}}$ satisfying eq. 12. (Note, we want $\sup \varnothing = -\infty$, $\inf \varnothing = +\infty$, and $(f_k)_{k\in\mathbb{N}}$ to answer the leading question of §3.2, where the answer to the blockquote of §1.2.1 is $\mathbb{E}[f_k]$, when it exists.)

7. Questions

(1) Does §6 answer the leading question in §3.2?
(2) Using §1.1 and thm. 2, when the function $f = F$, does $\mathbb{E}[f_k]$ have a finite value?
(3) If there's no time to check questions 1 and 2, see §4.

8. Appendix of §5.3.1

8.1. Example of §5.3.1, Step (1)

Suppose
(1) $A = \mathbb{R}$;
(2) $f : A \to \mathbb{R}$ is defined by:
$$f(x) = \begin{cases} 1 & x < 0 \\ -1 & 0 \le x < 0.5 \\ 0.5 & 0.5 \le x \end{cases}$$
(3) $(G_r)_{r\in\mathbb{N}} = \big(\{(x, f(x)) : -r \le x \le r\}\big)_{r\in\mathbb{N}}$.
Then one example of $\mathcal{C}(\sqrt{2}/6, G_1, 1)$, using §5.3.1 step (1) (where $G_1 = \{(x, f(x)) : -1 \le x \le 1\}$), is:
$$\Big\{\big\{(x, f(x)) : \tfrac{(i-1)\sqrt{2} - 6}{6} \le x \le \tfrac{i\sqrt{2} - 6}{6}\big\} \;:\; i = 1, \cdots, 9\Big\}$$
Note, the length of each partition is $\sqrt{2}/6$; numerically, the cover boundaries are approximately
$$-1,\ -.764,\ -.528,\ -.293,\ -.057,\ .178,\ .414,\ .65,\ .886,\ 1.121,$$
which is illustrated using alternating orange/black lines of equal length covering $G_1$ (i.e., the black vertical lines are the smallest and largest $x$-coordinates of $G_1$).
(Note, the alternating covers in Figure 1 satisfy step (1) of §5.3.1, because the Hausdorff measure in its dimension of each cover is $\sqrt{2}/6$ and there are 9 covers over-covering $G_1$: i.e.,
Definition 3 (Minimum covers of measure $\varepsilon = \sqrt{2}/6$ covering $G_1$). We can compute the minimum number of covers in $\mathcal{C}(\sqrt{2}/6, G_1, 1)$ using the formula:
$$\big\lceil \mathcal{H}^{\dim_{\text{H}}(G_1)}(G_1) / (\sqrt{2}/6) \big\rceil$$
where $\lceil \mathcal{H}^{\dim_{\text{H}}(G_1)}(G_1) / (\sqrt{2}/6) \rceil = \lceil \mathrm{Length}([-1, 1]) / (\sqrt{2}/6) \rceil = \lceil 2 / (\sqrt{2}/6) \rceil = \lceil 6\sqrt{2} \rceil = \lceil 8.485\ldots \rceil = 9$.)
Figure 1. The alternating orange and black lines are the "covers" and the vertical lines are the boundaries of $G_1$.
Note there are other examples of $\mathcal{C}(\sqrt{2}/6, G_1, \omega)$ for different $\omega$. Here is another case, which can be defined (see eq. 14 for comparison):
$$\Big\{\big\{(x, f(x)) : \tfrac{6 - i\sqrt{2}}{6} \le x \le \tfrac{6 - (i-1)\sqrt{2}}{6}\big\} \;:\; i = 1, \cdots, 9\Big\}$$
In the case of $G_1$, there are uncountably many different covers $\mathcal{C}(\sqrt{2}/6, G_1, \omega)$ which can be used. For instance, for a shift $\alpha$ with $(12 - 9\sqrt{2})/6 \le \alpha \le 0$ (i.e., $\omega = \alpha(12 - 9\sqrt{2})/6 + 1$), consider:
$$\Big\{\big\{(x, f(x)) : \alpha - 1 + \tfrac{(i-1)\sqrt{2}}{6} \le x \le \alpha - 1 + \tfrac{i\sqrt{2}}{6}\big\} \;:\; i = 1, \cdots, 9\Big\}$$
When $\alpha = (12 - 9\sqrt{2})/6$ (i.e., $\omega = (9\sqrt{2} - 6)/6$), we get Figure 2, and when $\alpha = 0$ (i.e., $\omega = 1$), we get Figure 1.
Figure 2. This is similar to Figure 1, except the start-points of the covers are shifted all the way to the left.

8.2. Example of §5.3.1, Step (2)

Suppose:
(1) $A = \mathbb{R}$;
(2) $f : A \to \mathbb{R}$ is defined by:
$$f(x) = \begin{cases} 1 & x < 0 \\ -1 & 0 \le x < 0.5 \\ 0.5 & 0.5 \le x \end{cases}$$
(3) $(G_r)_{r\in\mathbb{N}} = \big(\{(x, f(x)) : -r \le x \le r\}\big)_{r\in\mathbb{N}}$;
(4) $G_1 = \{(x, f(x)) : -1 \le x \le 1\}$;
(5) $\mathcal{C}(\sqrt{2}/6, G_1, 1)$, using eq. 15 and Figure 1, is approximately the cover with boundaries
$$-1,\ -.764,\ -.528,\ -.293,\ -.057,\ .178,\ .414,\ .65,\ .886,\ 1.121.$$
Then, an example of $\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$ is:
$$\{(-.9, 1),\ (-.65, 1),\ (-.4, 1),\ (-.2, 1),\ (.1, -1),\ (.3, -1),\ (.55, .5),\ (.75, .5),\ (1, .5)\}$$
Below, we illustrate the sample: i.e., the set of all blue points, one in each orange or black line of $\mathcal{C}(\sqrt{2}/6, G_1, 1)$ covering $G_1$:
Figure 3. The blue points are the "sample points", the alternating black and orange lines are the "covers", and the red lines mark the smallest and largest $x$-coordinates of $G_1$.
Note, there are multiple samples that can be taken, as long as one sample point is taken from each cover in $\mathcal{C}(\sqrt{2}/6, G_1, 1)$.

8.3. Example of §5.3.1, Step (3)

Suppose
(1) $A = \mathbb{R}$;
(2) $f : A \to \mathbb{R}$ is defined by:
$$f(x) = \begin{cases} 1 & x < 0 \\ -1 & 0 \le x < 0.5 \\ 0.5 & 0.5 \le x \end{cases}$$
(3) $(G_r)_{r\in\mathbb{N}} = \big(\{(x, f(x)) : -r \le x \le r\}\big)_{r\in\mathbb{N}}$;
(4) $G_1 = \{(x, f(x)) : -1 \le x \le 1\}$;
(5) $\mathcal{C}(\sqrt{2}/6, G_1, 1)$, using eq. 15 and Figure 1, is approximately the cover with boundaries
$$-1,\ -.764,\ -.528,\ -.293,\ -.057,\ .178,\ .414,\ .65,\ .886,\ 1.121;$$
(6) $\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$, using eq. 20, is:
$$\{(-.9, 1),\ (-.65, 1),\ (-.4, 1),\ (-.2, 1),\ (.1, -1),\ (.3, -1),\ (.55, .5),\ (.75, .5),\ (1, .5)\}$$
Therefore, consider the following process:

8.3.1. Step 3a

If $\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$ is:
$$\{(-.9, 1),\ (-.65, 1),\ (-.4, 1),\ (-.2, 1),\ (.1, -1),\ (.3, -1),\ (.55, .5),\ (.75, .5),\ (1, .5)\},$$
suppose $x_0 = (-.9, 1)$. Note the following:
(1) $x_1 = (-.65, 1)$ is the next point in the "pathway", since it's the point of $\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$, other than $x_0$, with the smallest 2-d Euclidean distance to $x_0$.
(2) $x_2 = (-.4, 1)$ is the third point, since it's the point of $\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$, other than $x_0$ and $x_1$, with the smallest 2-d Euclidean distance to $x_1$.
(3) $x_3 = (-.2, 1)$ is the fourth point, since it's the point of $\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$, other than $x_0$, $x_1$, and $x_2$, with the smallest 2-d Euclidean distance to $x_2$.
(4) We continue this process, where the "pathway" of $\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$ is:
$$(-.9, 1) \to (-.65, 1) \to (-.4, 1) \to (-.2, 1) \to (.55, .5) \to (.75, .5) \to (1, .5) \to (.3, -1) \to (.1, -1)$$
Note 3. If more than one point has the minimum 2-d Euclidean distance from $x_0$, $x_1$, $x_2$, etc., take all potential pathways: e.g., using the sample in eq. 24, if $x_0 = (-.65, 1)$, then since $(-.9, 1)$ and $(-.4, 1)$ both have the smallest Euclidean distance to $(-.65, 1)$, we take two pathways:
$$(-.65, 1) \to (-.9, 1) \to (-.4, 1) \to (-.2, 1) \to (.55, .5) \to (.75, .5) \to (1, .5) \to (.3, -1) \to (.1, -1)$$
and also:
$$(-.65, 1) \to (-.4, 1) \to (-.2, 1) \to (-.9, 1) \to (.55, .5) \to (.75, .5) \to (1, .5) \to (.3, -1) \to (.1, -1)$$

8.3.2. Step 3b

Next, take the length of all line segments in each pathway. In other words, suppose $d(P, Q)$ is the 2-d Euclidean distance between points $P, Q \in \mathbb{R}^2$. Using the pathway in eq. 25, we want:
$$\{d((-.9,1),(-.65,1)),\ d((-.65,1),(-.4,1)),\ d((-.4,1),(-.2,1)),\ d((-.2,1),(.55,.5)),\ d((.55,.5),(.75,.5)),\ d((.75,.5),(1,.5)),\ d((1,.5),(.3,-1)),\ d((.3,-1),(.1,-1))\},$$
whose distances can be approximated as:
$$\{.25,\ .25,\ .2,\ .901389,\ .2,\ .25,\ 1.655295,\ .2\}$$
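These lengths can be checked directly; a quick script (coordinates as approximated in §8.2) gives the same values:

```python
import math

# Pathway from eq. 25, §8.3.1
path = [(-.9, 1), (-.65, 1), (-.4, 1), (-.2, 1), (.55, .5),
        (.75, .5), (1, .5), (.3, -1), (.1, -1)]
print([round(math.dist(p, q), 6) for p, q in zip(path, path[1:])])
# approximately [0.25, 0.25, 0.2, 0.901, 0.2, 0.25, 1.655, 0.2]
```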
Also, we see the outliers [11] are $.901389$ and $1.655295$ (notice that such outliers become even more prominent for $\varepsilon$ much smaller than $\sqrt{2}/6$). Therefore, remove $.901389$ and $1.655295$ from our set of lengths:
$$\{.25,\ .25,\ .2,\ .2,\ .25,\ .2\}$$
This is illustrated in Figure 4:
Figure 4. The black arrows are the segments of the "pathway" whose lengths aren't outliers; the lengths of the red arrows in the pathway are outliers.
Hence, when $x_0 = (-.9, 1)$, using §5.3.1 step (3b) and eq. 24, we note:
$$\mathcal{L}((-.9, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)) = \{.25,\ .25,\ .2,\ .2,\ .25,\ .2\}$$

8.3.3. Step 3c

To convert the set of distances in eq. 27 into a probability distribution, we take:
$$\sum_{x \in \{.25, .25, .2, .2, .25, .2\}} x = .25 + .25 + .2 + .2 + .25 + .2 = 1.35$$
Then divide each element in $\{.25, .25, .2, .2, .25, .2\}$ by $1.35$:
$$\{.25/1.35,\ .25/1.35,\ .2/1.35,\ .2/1.35,\ .25/1.35,\ .2/1.35\},$$
which gives us the probability distribution:
$$\{5/27,\ 5/27,\ 4/27,\ 4/27,\ 5/27,\ 4/27\}$$
Hence,
$$\mathbb{P}(\mathcal{L}((-.9, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) = \{5/27,\ 5/27,\ 4/27,\ 4/27,\ 5/27,\ 4/27\}$$

8.3.4. Step 3d

Take the Shannon entropy of eq. 29:
$$\mathrm{E}(\mathbb{P}(\mathcal{L}((-.9, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)))) = -\sum_{x \in \{5/27, 5/27, 4/27, 4/27, 5/27, 4/27\}} x \log_2 x = -\tfrac{15}{27}\log_2\tfrac{5}{27} - \tfrac{12}{27}\log_2\tfrac{4}{27} \approx 2.57604$$
We shorten $\mathrm{E}(\mathbb{P}(\mathcal{L}((-.9, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))))$ to $\mathrm{E}(\mathcal{L}((-.9, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)))$, giving us:
$$\mathrm{E}(\mathcal{L}((-.9, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.57604$$
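The arithmetic can be checked in a couple of lines (the length multiset is the one from eq. 27):

```python
import numpy as np

lengths = np.array([.25, .25, .2, .2, .25, .2])  # L((-.9, 1), S(C(sqrt(2)/6, G_1, 1), 1))
p = lengths / lengths.sum()                      # {5/27, 5/27, 4/27, 4/27, 5/27, 4/27}
print(-(p * np.log2(p)).sum())                   # approximately 2.57604
```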

8.3.5. Step 3e

Take the entropy, w.r.t. all pathways, of the sample:
$$\{(-.9, 1),\ (-.65, 1),\ (-.4, 1),\ (-.2, 1),\ (.1, -1),\ (.3, -1),\ (.55, .5),\ (.75, .5),\ (1, .5)\}$$
In other words, we'll compute:
$$\mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) = \sup_{x_0 \in \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)} \mathrm{E}(\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)))$$
We do this by repeating §8.3.1-§8.3.4 for each $x_0 \in \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)$ (listing multiple values where ties give multiple pathways; see note 3):
$$\mathrm{E}(\mathcal{L}((-.9, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.57604$$
$$\mathrm{E}(\mathcal{L}((-.65, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.3131,\ 2.377604$$
$$\mathrm{E}(\mathcal{L}((-.4, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.3131$$
$$\mathrm{E}(\mathcal{L}((-.2, 1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.57604$$
$$\mathrm{E}(\mathcal{L}((.1, -1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 1.86094$$
$$\mathrm{E}(\mathcal{L}((.3, -1), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 1.85289$$
$$\mathrm{E}(\mathcal{L}((.55, .5), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.08327$$
$$\mathrm{E}(\mathcal{L}((.75, .5), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.31185$$
$$\mathrm{E}(\mathcal{L}((1, .5), \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.2622$$
Hence, since the largest of the values above is $2.57604$:
$$\mathrm{E}(\mathcal{L}(\mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) = \sup_{x_0 \in \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1)} \mathrm{E}(\mathcal{L}(x_0, \mathcal{S}(\mathcal{C}(\sqrt{2}/6, G_1, 1), 1))) \approx 2.57604$$

References

  1. Bernardi, C.; Rainaldi, C. Everywhere surjections and related topics: Examples and counterexamples. Le Matematiche 2018, 73, 71-88. https://www.researchgate.net/publication/325625887_Everywhere_surjections_and_related_topics_Examples_and_counterexamples.
  2. Ott, W.; Yorke, J.A. Prevalence. Bulletin of the American Mathematical Society 2005, 42, 263-290. https://www.ams.org/journals/bull/2005-42-03/S0273-0979-05-01060-8/S0273-0979-05-01060-8.pdf.
  3. Achour, R.; Li, Z.; Selmi, B.; Wang, T. A multifractal formalism for new general fractal measures. Chaos, Solitons & Fractals 2024, 181, 114655. https://www.sciencedirect.com/science/article/abs/pii/S0960077924002066.
  4. Bedford, T.; Fisher, A.M. Analogues of the Lebesgue density theorem for fractal sets of reals and integers. Proceedings of the London Mathematical Society 1992, (3) 64, 95-124. https://www.ime.usp.br/~afisher/ps/Analogues.pdf.
  5. Bedford, T.; Fisher, A.M. Ratio geometry, rigidity and the scenery process for hyperbolic Cantor sets. Ergodic Theory and Dynamical Systems 1997, 17, 531-564. https://arxiv.org/pdf/math/9405217.
  6. Sipser, M. Introduction to the Theory of Computation, 3rd ed.; Cengage Learning, 2012; pp. 275-322.
  7. Krishnan, B. Bharath Krishnan's ResearchGate profile. https://www.researchgate.net/profile/Bharath-Krishnan-4.
  8. Barański, K.; Gutman, Y.; Śpiewak, A. Prediction of dynamical systems from time-delayed measurements with self-intersections. Journal de Mathématiques Pures et Appliquées 2024, 186, 103-149. https://www.sciencedirect.com/science/article/pii/S0021782424000345.
  9. Caetano, A.M.; Chandler-Wilde, S.N.; Gibbs, A.; Hewett, D.P.; Moiola, A. A Hausdorff-measure boundary element method for acoustic scattering by fractal screens. Numerische Mathematik 2024, 156, 463-532.
  10. sbf (https://math.stackexchange.com/users/5887/sbf). Convergence of functions with different domain. Mathematics Stack Exchange. https://math.stackexchange.com/q/1063261.
  11. Outlier. Wikipedia. https://en.m.wikipedia.org/wiki/Outlier.
  12. Gray, R.M. Entropy and Information Theory, 2nd ed.; Springer: New York, 2011; pp. 61-95. https://ee.stanford.edu/~gray/it.pdf.