Commit MetaInfo

Revision: 0b7a88778adcf8906d0e1031b543f9f1e33b5924 (tree)
Time: 2017-11-08 00:30:09
Author: Lorenzo Isella <lorenzo.isella@gmai...>
Committer: Lorenzo Isella

Log Message

I added a file which contains a useful function to add comments to a tex file.

Change Summary

Diff

diff -r fa117086f5ea -r 0b7a88778adc latex-documents/HomogeneousKernel_yd2.tex
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/latex-documents/HomogeneousKernel_yd2.tex Tue Nov 07 16:30:09 2017 +0100
@@ -0,0 +1,462 @@
1+%\documentclass[titlepage,preprint,showpacs,superscriptaddress,amsmath,amssymb,aps,pre]{revtex4-1}
2+\documentclass[notitlepage,a4paper,superscriptaddress,amsmath,amssymb,aps,pre,nofootinbib,10pt]{revtex4-1}
3+\usepackage{graphicx}
4+\usepackage[usenames]{color}
5+\usepackage{url}
6+\usepackage{longtable}
7+\usepackage{natbib}
8+
9+\usepackage{xcolor}
10+
11+\makeatletter
12+\def\mathcolor#1#{\@mathcolor{#1}}
13+\def\@mathcolor#1#2#3{%
14+ \protect\leavevmode
15+ \begingroup
16+ \color#1{#2}#3%
17+ \endgroup
18+}
19+\makeatother
20+
21+\newcommand{\lorenzo}[1]{{\leavevmode\color{blue}[#1]}}
22+
23+\newcommand{\s}{\sum}
24+\newcommand{\f}{\frac}
25+\newcommand{\rro}{\right)}
26+\newcommand{\lro}{\left( }
27+\newcommand{\lsq}{\left[}
28+\newcommand{\rsq}{\right]}
29+
30+\newcommand{\beq}{\begin{equation}}
31+\newcommand{\eeq}{\end{equation}}
32+
33+\newcommand{\Pe}{\textrm{Pe}}
34+\newcommand{\St}{\textrm{St}}
35+\newcommand{\DaC}{\textrm{Da}_1}
36+\newcommand{\DaD}{\textrm{Da}_2}
37+
38+\newcommand{\vth}{v_{th}}
39+\newcommand{\taup}{\tau_p}
40+\newcommand{\la}{\langle}
41+\newcommand{\ra}{\rangle}
42+
43+\newcommand{\pbeta}{p_{\beta}}
44+\newcommand{\df}{d_f}
45+
46+\pagestyle{plain}
47+
48+\begin{document}
49+
50+\title{Is the fragmentation kernel homogeneous?}
51+
52+
53+%\author{LI}
54+%\author{ADM}
55+\author{YD}
56+\email{yannis.drossinos@ec.europa.eu}
57+\affiliation{European Commission, Joint Research Centre,
58+I-21027 Ispra (VA), Italy}
59+%\author{Kostoglou?}
60+%\author{Konstandopoulos?}
61+
62+\date{\today, HomogeneousKernel\_yd2.tex}
63+
64+\maketitle
65+
66+I have been thinking about the fragmentation kernel and the way we split it into two parts.
67+In addition, or maybe triggered by my thoughts, I re-read (carefully this time)
68+Ref.~\cite{Odriozola2002}. A summary of my conclusions follows.
69+
70+\begin{enumerate}
71+\item We decided to express agglomerate sizes in terms of the largest size $y$ and one of
72+the fragment sizes $x$ (the other being, for binary fragmentation, $y-x$). This choice contrasts with
73+the more traditional, and more convenient for agglomeration, choice where the initial sizes
74+of the fragments are specified, ($i,j$), that upon
75+agglomeration give rise to an ($i+j$) agglomerate. The mapping is, of course,
76+$i=x$ (without loss of generality) and $j=y-x$.
77+\item We wrote the fragmentation kernel $K(x,y)$ in terms of the fragmentation rate $a(y)$
78+of a particle of size $y$ and $b(x|y)$ the distribution of particles of size $x$ resulting from the breakup of a particle of size $y$ (i.e., the fragment size distribution),
79+\beq
80+K(x,y) = a(y)\, b(x|y).
81+\label{eq:kernel}
82+\eeq
83+We also argued that $b(x|y)$ should satisfy a number of constraints. I think it is important to
84+enumerate them here.
85+\begin{enumerate}
86+\item Mass (monomer number) conservation: the sum of the monomers in the fragments must equal
87+the number of monomers in the initial agglomerate. The algebraic manipulations arise
88+from the change of variable from $x$ with $0 \leq x \leq y$ to $z=x/y$ with
89+$0 \le z \le 1$.
90+\beq
91+\int_0^y dx \, x \, b(x|y) = y, \quad \Longrightarrow
92+\quad y^2\, \int_0^1 \, dz \, z \, b(z,y) = y \quad
93+\Longrightarrow \boxed{\int_0^1 dz \, z \, \tilde{b}(z) = 1 \quad \mbox{with}
94+\quad b(z,y) = y^{-1} \, \tilde{b}(z)}.
95+\label{eq:MassConservation}
96+\eeq
97+\item The number of particles arising from the fragmentation of a single size $y$ particle,
98+namely the expected number of particles, is (for binary fragmentation)
99+\beq
100+\int_0^y dx \, b(x|y) = 2, \quad \Longrightarrow \quad
101+\int_0^1 \, dz \, y \, b(z,y) = 2 \quad
102+\Longrightarrow \quad \boxed{\int_0^1 \, dz \, \tilde{b}(z) = 2 \quad
103+\mbox{as before,}
104+\quad b(z,y) = y^{-1} \, \tilde{b}(z)}.
105+\label{eq:NumberFragments}
106+\eeq
107+\end{enumerate}
108+\item We expressed the fragment size distribution as (I changed the notation slightly)
109+\begin{eqnarray}
110+\mathcolor{magenta}{b(x|y)} & = & \f{2}{B(\alpha,\alpha)} \, \f{1}{y} \,
111+\Big (\f{x}{y} \Big )^{\alpha -1} \,
112+\Big (1 - \f{x}{y} \Big )^{\alpha -1}
113+= \frac{2}{B(\alpha,\alpha)} \, y^{-1} \, \Big[ z \, (1-z) \Big ]^{\alpha-1} \\
114+& \equiv & 2 \, y^{-1} \, \pbeta(\frac{x}{y};\alpha,\alpha)
115+\equiv \mathcolor{magenta}{2 \, y^{-1} \, \pbeta(z;\alpha)}
116+\quad \mbox{with} \quad z = \frac{x}{y} \in [0,1],
117+\label{eq:FragBeta}
118+\end{eqnarray}
119+where $\pbeta(z; \alpha,\beta)$ is the beta distribution
120+\beq
121+\pbeta(z;\alpha,\beta) = \frac{z^{\alpha -1}(1-z)^{\beta-1}}{B(\alpha, \beta)}
122+\quad \mbox{with} \quad z \in [0,1],
123+\label{eq:betadistr}
124+\eeq
125+and $B(\alpha, \beta)$ is the beta function. Note that I drop the third argument of $\pbeta$ if it equals
126+the second, i.e., $\pbeta(z; \alpha) = \pbeta(z;\alpha,\alpha)$.
127+The constraints are trivially satisfied if one realizes that for
128+our fitting function
129+\beq
130+\tilde{b}(z) = 2 \pbeta(z; \alpha),
131+\eeq
132+and uses the following properties of the beta distribution
133+\begin{eqnarray}
134+\int_0^1 \, dz \, \pbeta (z;\alpha) & = & 1, \\
135+\mu = E[X] = \int_0^1 \, dz \, z \, \pbeta (z;\alpha, \beta) & = & \frac{\alpha}{\alpha + \beta}.
136+\end{eqnarray}
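As a quick numerical sanity check (my own sketch, not part of the note; $\alpha = 1.5$ is an arbitrary illustrative value), one can confirm that $\tilde{b}(z) = 2\,\pbeta(z;\alpha)$ satisfies both boxed constraints:

```python
# Sketch: check that b~(z) = 2 * p_beta(z; alpha, alpha) satisfies the two
# boxed constraints, int b~ dz = 2 (number) and int z b~ dz = 1 (mass).
# alpha = 1.5 is an arbitrary illustrative value, not from the note.
from math import gamma

def beta_pdf(z, a, b):
    # p_beta(z; a, b) = z^(a-1) (1-z)^(b-1) / B(a, b)
    return z**(a - 1) * (1 - z)**(b - 1) * gamma(a + b) / (gamma(a) * gamma(b))

def midpoint(f, n=100_000):
    # midpoint rule on (0, 1); the endpoints are never evaluated
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

alpha = 1.5
number = midpoint(lambda z: 2 * beta_pdf(z, alpha, alpha))    # fragment count
mass = midpoint(lambda z: z * 2 * beta_pdf(z, alpha, alpha))  # monomer mass
```

The mass integral equals half the number integral by the $z \to 1-z$ symmetry of the beta distribution with equal shape parameters, which is why both constraints can hold simultaneously.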
137+\item Reference~\cite{Odriozola2002} makes two points:
138+\begin{enumerate}
139+\item They remark that the Brownian kernel for fractal-like agglomerates is (see also our
140+one-dimensional agglomeration VELA paper)
141+\beq
142+K_{ij}^{Br} = K^{Br}(x,y) = K^{Br}(z=x/y) =
143+\frac{2 k_B T}{3 \mu} \, \left [ z^{1/\df} + (1-z)^{1/\df} \right ] \,
144+ \left [ z^{-1/\df} + (1-z)^{-1/\df} \right ].
145+\label{eq:BrownianKernel}
146+\eeq
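As a small numerical illustration (my own sketch, not part of the note; the prefactor $2 k_B T / 3\mu$ is set to 1 and $\df = 1.8$, a typical DLCA value, is an assumed number), one can check that this kernel depends on $(x,y)$ only through $z=x/y$ and is symmetric in the two fragments:

```python
# Sketch: the Brownian kernel above is homogeneous of degree zero, i.e. it
# depends on (x, y) only through z = x/y, and it is symmetric under
# z -> 1 - z. The prefactor 2 k_B T / (3 mu) is set to 1 and d_f = 1.8
# (a typical DLCA value) is an illustrative assumption, not from the note.
DF = 1.8

def k_brownian(x, y, df=DF):
    z = x / y
    return (z**(1 / df) + (1 - z)**(1 / df)) * (z**(-1 / df) + (1 - z)**(-1 / df))

k_small = k_brownian(2.0, 10.0)       # z = 0.2
k_large = k_brownian(200.0, 1000.0)   # same z = 0.2, much larger sizes
k_mirror = k_brownian(8.0, 10.0)      # z = 0.8 = 1 - 0.2
```

The equality of `k_small` and `k_large` is the homogeneity property invoked below when everything is rewritten in terms of $z$ alone.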
147+\item They argue that the fragmentation kernel $K_{ij}$, ``the mean rate constant at which
148+($i+j$)-size clusters break spontaneously into two $i$- and $j$-size clusters", may be expressed as
149+\beq
150+K_{ij} = \frac{1}{\tau} \, e_{ij} \, (1+ \delta_{ij}),
151+\label{eq:KernelBonds}
152+\eeq
153+where $\tau$ is the average bond lifetime, and ``$e_{ij}$ the number of bonds contained in $n$-size clusters which, on breakup, lead to $i$- and $j$-size
154+fragments". I think this implies that $e_{ij}$ is nothing other than the fragment
155+size distribution; do you agree?
156+
157+\vspace*{2cm}
158+
159+\lorenzo{LI: I think this is the crucial part and this has
160+ consequences on Eqs \eqref{eq:DefineKernel1} and
161+ \eqref{eq:DefineKernel}. I have tried to write this several times,
162+ and it never comes out very clear. I will focus only on the first
163+ part of \eqref{eq:KernelBonds}, the one with $1/\tau$.
164+ \begin{itemize}
165+ \item I convinced myself that we are looking at a Poisson process
166+ (equivalently, a waiting-time or birth-and-death problem).
167+ We have to be super careful because when we think about decaying
168+ atoms, we (or at least I) then transfer the reasoning to the
169+ monomers, whereas the only decaying entities are the
170+ bonds. True, there is (for our aggregates) a strict relation
171+ between the aggregate size $y$ and the number of bonds $N_{b}$,
172+ but conceptually they are two distinct entities. Also,
173+ $N_{b}=N_{b}(t)$, since we are dealing with fragmentation, whereas
174+ $y$ is conserved.
175+
176+ On top of that, we are dealing with an inhomogeneous Poisson
177+ process. Read what happens next.
178+ \item For a single bond with
179+ decay time $\tau$: if the bond is alive at $t=t'$, then the
180+ probability of finding it still alive at $t''=t'+s$ is $P(t''-t'=s)=\exp(-\lambda
181+ s)$, where $\lambda=1/\tau$, and there is
182+ nothing to say about $N_{b}$. We are dealing with a homogeneous
183+ Poisson process where $\lambda\neq \lambda(t)$. To make life
184+ simple: if at $t=0$ all the $N_{b}$ bonds are intact, I can
185+ expect to have $N_{b}(t)=N_{0}\exp(-\lambda t)$ bonds alive at
186+ time $t$, where $N_{0}=N_{b}(t=0)$.
187+\item
188+ Our aggregate breaks as soon as any of its $N_{b}$ bonds
189+ breaks. The question is not when I can expect on average a bond
190+ to break, but how long I can observe the aggregate without
191+ observing any of its bonds break (a waiting/stopping time
192+ problem, you are the expert here). I think that at the beginning we have a
193+ superposition of $N_{b}$ independent Poisson processes each one
194+ with its own mean $\lambda$. I say at the beginning because then
195+ the number of bonds will decrease in time and so will the number of
196+ Poisson processes I can superimpose. The result at the beginning is a Poisson process
197+ with mean $\lambda_{agg}=N_{b}\lambda$, and the aggregate's expected
198+ lifetime is $\tau_{agg}=\tau/N_{b}$. As time goes by, $N_{b}$
199+ decreases as a result of the disintegration and so
200+ $\lambda_{agg}=\lambda_{agg}(t)$. This is the definition of an
201+ inhomogeneous Poisson process. So, if the aggregate is alive at
202+ time $t=0$, it has a probability $P=\exp(-N_{b}\lambda s)$ to be
203+ alive at time $t=s$. However, after each fragmentation, $N_{b}$
204+ decreases and the expected time to the next fragmentation
205+ increases, tending to $\tau$ when there is a single bond
206+ left in the aggregate.
207+
208+ \item The decreased life expectancy of an aggregate with respect
209+ to its individual bonds makes sense because at time $t=s$
210+ each bond has a probability $P=\exp(-\lambda s)$ of not having
211+ disintegrated so the probability that all the bonds are alive
212+ (no disintegration, aggregate still whole) at time $s$ is $P=(\exp(-\lambda
213+ s))^{N_{b}}$ which is the same result as the superposition of
214+ Poisson processes.
215+ \item Bottom line: neglecting the discrete nature of the problem, we
216+ are dealing with an inhomogeneous Poisson process with
217+ $\lambda_{agg}=\lambda N_{0}\exp(-\lambda t)$. How to deal with
218+ this, I am not sure yet! From what I read online, the point should
219+ be that the number of bonds that break in $[t, t+s]$ should be
220+ distributed as $\mathrm{Poisson}\big(\int_{t}^{t+s}\lambda(\alpha)\,d\alpha\big)$.
221+\item If the reasoning above is correct, then the more I enlarge the
222+ aggregate, the shorter its lifetime, understood as the waiting time before
223+ any of its bonds breaks, leading to a fragmentation event. I think I
224+ already mentioned this, in my usual obscure way, in the past.
225+
226+ \item Bottom line 2: $a=a(N_{b})$ and certainly it is too
227+ simple to say that $a=1/\tau$, unless there is more in
228+ $e_{ij}$, but I do not think that is the case. Back to your question: $e_{ij}$ appears to be linked if
229+ not identical to the size distribution, since it counts how
230+ many configurations of sizes $\{i,j\}$ I can generate from breaking
231+ the aggregate.
232+\item Right now, I am not convinced about the $1/\tau$ business, but I
233+ need to think about it some more. However, this is ``just'' the kinetics of
234+ the process, something we have no idea about (what is $\tau$ in the
235+ first place?), but I think they are not getting it right, unless...
236+\item they are in a sort of steady state. Think about the measurement
237+ of radioactive decays on a sample in a physics lab. You simply observe
238+ on average e.g. 5 decays per second because you do not observe the
239+ sample long enough to see it convert entirely into something non
240+ radioactive. The number of radioactive atoms remains approximately
241+ constant while you look at it. If we think about the fact that each
242+ fragmentation event just decreases $N_{b}$ by 1, then perhaps it is
243+ not meaningless to say that there is an approximately constant
244+ $\tau$ for the aggregate, as long as I do not follow it to the bitter
245+ end of its disintegration into individual monomers.
246+\end{itemize}
247+}
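Lorenzo's superposition argument admits a quick Monte Carlo sanity check (my own sketch; $\tau = 1$ and $N_b = 10$ are arbitrary test values, not from the note): the aggregate first fragments at the minimum of $N_b$ independent exponential bond lifetimes, and that minimum is again exponential with mean $\tau/N_b$.

```python
# Monte Carlo sketch of the superposition argument: an aggregate with N_b
# independent, exponentially distributed bond lifetimes (mean tau) first
# fragments at the minimum of those lifetimes, which is again exponentially
# distributed with mean tau / N_b. tau = 1 and N_b = 10 are arbitrary values.
import random

random.seed(42)
tau, n_bonds, trials = 1.0, 10, 100_000
first_break = [
    min(random.expovariate(1.0 / tau) for _ in range(n_bonds))
    for _ in range(trials)
]
mean_first_break = sum(first_break) / trials  # theory: tau / n_bonds = 0.1
```

This only checks the very first fragmentation event; after it, $N_b$ has decreased and the rate must be updated, which is exactly the inhomogeneity discussed above.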
248+
249+\vspace*{2cm}
250+
251+\item Based on the functional form of the Brownian agglomeration kernel
252+Eq.~(\ref{eq:BrownianKernel}) they used their synthetic agglomerates to propose
253+\begin{eqnarray}
254+\lefteqn{\left. e_{ij} \, (1+ \delta_{ij}) \right |_{\textrm{fit}} =
255+p_1 \Big ( i^{p_2} + j^{p_2} \Big ) \, \Big ( i^{p_3} + j^{p_3} \Big )
256+\, \Big ( i j \Big )^{p_4},} \\
257+& & p_1 = 0.439 \ ; \quad p_2 = 1.006 \ ; \quad p_3 = -1.007 \ ; \quad p_4 = -0.1363
258+\label{eq:FitKernel}
259+\end{eqnarray}
260+where the numerical expressions for $p_i$ are the results of their fitting.
261+
262+\end{enumerate}
263+
264+\end{enumerate}
265+
266+What I suggest is to use this information to improve, possibly, our own estimate of the kernel.
267+Specifically,
268+\begin{enumerate}
269+\item Accept their kernel decomposition Eq.~(\ref{eq:KernelBonds}) so that
270+\begin{eqnarray}\label{eq:DefineKernel1}
271+ a(y) & = & \f{1}{\tau}, \\
272+\label{eq:DefineKernel}
273+b(x|y) & = & K_{ij} \, \tau = e_{ij} \, (1 + \delta_{ij} ) \equiv h_f(x|y),
274+\end{eqnarray}
275+namely, the fragmentation rate is independent of agglomerate size. Moreover, their proposed
276+fragment size distribution expressed in terms of our choice of variable (I will call it $h_f(x|y)$,
277+see Eq.~(\ref{eq:DefineKernel}))
278+becomes
279+\beq
280+h_f (x|y) = \mathcolor{red}{p_1 \, B(1+p_4, 1+p_4)} \,
281+\mathcolor{blue}{y^{p_2 + p_3 + 2p_4}}
282+\, \mathcolor{magenta}{\pbeta(z; 1+p_4)} \,
283+\underbrace{\Big [z^{p_2} + (1-z)^{p_2} \Big ] \, \Big [z^{p_3} + (1-z)^{p_3}
284+\Big ]}_{\textrm{additional terms}},
285+\label{eq:hf}
286+\eeq
287+with $z=x/y$. This form should be compared and contrasted to ours, reported again for easy comparison
288+\beq
289+b(x|y) = \mathcolor{red}{2} \,
290+\mathcolor{blue}{y^{-1}} \, \mathcolor{magenta}{\pbeta(z;\alpha)}
291+\quad \mbox{with} \quad z \in [0,1].
292+\label{eq:Ours}
293+\eeq
294+\item You might find the values of the fitted numerical constants interesting
295+\begin{eqnarray}
296+p_1 \ B(1+p_4, 1+p_4) & = & 0.581 \quad \quad \mbox{Our fit} \quad 2 \\
297+p_2 + p_3 + 2p_4 & = & -0.2736 \quad \quad \mbox{Our fit} \quad -1 \\
298+1+ p_4 & = & 0.8637 \quad \quad \mbox{Our fit} \quad \sim \frac{1}{2} \quad
299+\mbox{decreasing with increasing N} .
300+\end{eqnarray}
301+\end{enumerate}
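The three combinations quoted above can be reproduced directly from the fitted $p_i$ (a sketch using only the values already given in the text):

```python
# Sketch: reproduce the three numerical combinations quoted above from the
# fitted exponents of Ref. Odriozola2002 given earlier in the text.
from math import gamma

p1, p2, p3, p4 = 0.439, 1.006, -1.007, -0.1363

def beta_fn(a, b):
    # Euler beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return gamma(a) * gamma(b) / gamma(a + b)

prefactor = p1 * beta_fn(1 + p4, 1 + p4)  # text quotes 0.581 (our fit: 2)
exponent = p2 + p3 + 2 * p4               # text quotes -0.2736 (our fit: -1)
shape = 1 + p4                            # text quotes 0.8637 (our fit: ~1/2)
```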
302+
303+In short: What am I suggesting?
304+\begin{enumerate}
305+\item The similarity between Eqs.~(\ref{eq:hf}) and (\ref{eq:Ours}) is, I think, apparent
306+(I chose colors to identify similar factors).
307+\item Would it be possible to use Eq.~(\ref{eq:hf}) to fit our fragment size distribution?
308+I know that if more parameters are introduced it is easier to fit the empirical distribution.
309+However, we might argue, as they did, that there is a reason for our choice of the fitting function.
310+It might be worth a try if it does not involve a lot of work.
311+
312+\begin{enumerate}
313+\item I constructed a two-parameter function that emulates the Brownian agglomeration kernel
314+and satisfies the constraints mentioned earlier. My suggestion:
315+\begin{subequations}
316+\beq
317+\label{eq:Fit1}
318+\mathcolor{red}{
319+\boxed{
320+b(x|y) = \alpha_1 \, y^{-1} \, \Big [z^{\alpha_2} + (1-z)^{\alpha_2} \Big ] \,
321+\Big [z^{\alpha_3} + (1-z)^{\alpha_3} \Big ] \,
322+\Big [z \, (1-z) \Big ]^{\alpha_4} } } ,
323+\eeq
324+\beq
325+\mathcolor{blue}{
326+\boxed{
327+\mbox{with} \quad \alpha_4 = - \frac{1}{2} \, \Big (1 + \alpha_2 + \alpha_3 \Big ) } } ,
328+\label{eq:Fit1Constraint1}
329+\eeq
330+\beq
331+\mathcolor{blue}{
332+\boxed{
333+\mbox{and} \quad \alpha_1 = \Big [ \Gamma(1+\alpha_2+\alpha_4) \, \Gamma(1+\alpha_3 + \alpha_4) \,
334++ \Gamma(1+\alpha_4) \, \Gamma(1+\alpha_2+\alpha_3+\alpha_4) \, \Big ]^{-1} } }.
335+\label{eq:Fit1Constraint2}
336+\eeq
337+\end{subequations}
338+I explicitly imposed the constraint $\alpha_2 + \alpha_3 + 2 \alpha_4 = -1$ (it arises from the
339+requirement that $b(z,y) = y^{-1} \tilde{b}(z)$, see
340+Eqs.~(\ref{eq:MassConservation}) and (\ref{eq:NumberFragments})), and I evaluated
341+(thanks to Mathematica) the normalization factor $\alpha_1$ such that the
342+two constraints are satisfied.
343+
344+Then, the proposed fitting function Eq.~(\ref{eq:Fit1}) has only two independent parameters: $\alpha_2$
345+and $\alpha_3$. If that seems difficult, there is another option: see point 4.
346+
347+\item A parenthetical remark: if we take $\alpha_2 = \alpha_3 =0$ we should get something similar
348+to our current fitting function,
349+\beq
350+b(x|y) = y^{-1} \, \tilde{b}(z) =
351+\frac{2}{B(\alpha, \alpha)} \, y^{-1} \Big [ z \, (1-z) \Big ]^{\alpha -1},
352+\eeq
353+with $\alpha$ the fitting parameter. For $\alpha_2 = \alpha_3 =0$ Eq.~(\ref{eq:Fit1}) becomes
354+\beq
355+b(x|y) = \frac{2}{B( 1/2, 1/2)} \, y^{-1} \, \Big [ z \, (1-z) \Big ]^{-1/2} =
356+2 y^{-1} \, \pbeta(z; \frac{1}{2}).
357+\eeq
358+Surprise, our parameter $\alpha$ is fixed at $\alpha = 1/2$ (and it is independent of $y$).
359+
360+\item Alternatively, one can use the function Reference~\cite{Odriozola2002}
361+proposes,
362+\beq
363+b (x|y) = \alpha_1 \, y^{\alpha_2 + \alpha_3 + 2 \alpha_4}
364+\Big [z^{\alpha_2} + (1-z)^{\alpha_2} \Big ] \, \Big [z^{\alpha_3} + (1-z)^{\alpha_3}
365+\Big ] \, \Big [z \, (1-z) \Big ]^{\alpha_4}.
366+\label{eq:Fit2}
367+\eeq
368+Here, the fitting parameters are three exponents and one normalization constant. The greater freedom
369+in fitting the fragment size distribution (four parameters) comes at the expense that there is
370+no guarantee that the numerical constants will be such that the constraints
371+hold (even approximately). As
372+I have been trying to argue, the constants of Ref.~\cite{Odriozola2002} do not.
373+\end{enumerate}
374+\item There is one more reason to try to check their calculation. If we take the similarity of the
375+(random, binary) fragmentation kernel with the Brownian agglomeration kernel as valid then
376+we might expect
377+\beq
378+p_2 = - p_3 = 1/\df \quad \Longrightarrow \quad p_2 = 1.006 \sim - p_3
\quad \Longrightarrow \quad \df \sim 0.99.
379+\eeq
380+In short, their numbers do not seem to be related to the fractal dimension of the fragments.
381+In addition, they only performed their calculations for DLCA agglomerates.
382+\item Latest addition: it just occurred to me that if we are really courageous we can just set
383+\beq
384+\alpha_2 = -\alpha_3 = 1/\df \quad \Longrightarrow \quad
385+\alpha_4 = -\frac{1}{2}, \quad \alpha_1 =
386+\Big [ B(\frac{1}{2} + \frac{1}{\df}, \frac{1}{2} - \frac{1}{\df}) + \Gamma(\frac{1}{2}) \, \Gamma(\frac{1}{2}) \Big ]^{-1},
387+\eeq
388+where $\df$ is the fractal dimension of the fragments. There is nothing to fit. Will it work?
389+\item Lastly, it seems apparent that $h_f(z,y)$, defined in Eq.~(\ref{eq:hf}), does not
390+necessarily satisfy
391+the two constraints (ours, by construction and simplicity, does).
392+Specifically, the integral constraints become the algebraic constraints reported in
393+Eqs.~(\ref{eq:Fit1Constraint1}) and (\ref{eq:Fit1Constraint2}). The $p_i$ reported in
394+Ref.~\cite{Odriozola2002} give
395+\begin{subequations}
396+\beq
397+p_4 = - \frac{1}{2} \, (1 + p_2 + p_3) = -0.4995 ,
398+\eeq
399+whereas the numerically determined value is $p_4 = -0.1363$ (first constraint), and
400+\beq
401+p_1 = \Big [ \Gamma(1+p_2+p_4) \, \Gamma(1+p_3 + p_4) \,
402++ \Gamma(1+p_4) \, \Gamma(1+p_2+p_3+p_4) \, \Big ]^{-1} = -0.1630,
403+\eeq
404+whereas the numerically determined value is $p_1 = 0.439$ (second constraint).
405+\end{subequations}
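Both mismatches can be reproduced directly (a sketch using only the $p_i$ already quoted in the text):

```python
# Sketch: plug the fitted exponents of Ref. Odriozola2002 into the two
# algebraic constraints; the results reproduce the mismatches quoted above.
from math import gamma

p1, p2, p3, p4 = 0.439, 1.006, -1.007, -0.1363

# first constraint: text quotes -0.4995, vs the fitted p4 = -0.1363
p4_required = -0.5 * (1 + p2 + p3)

# second constraint: text quotes -0.1630, vs the fitted p1 = 0.439
# (math.gamma handles the negative non-integer argument 1 + p3 + p4)
p1_required = 1.0 / (gamma(1 + p2 + p4) * gamma(1 + p3 + p4)
                     + gamma(1 + p4) * gamma(1 + p2 + p3 + p4))
```

Note that $1 + p_3 + p_4 = -0.1433 < 0$ for their fitted values, so the corresponding normalization integral does not even converge; the negative $p_1$ above comes from the analytic continuation of the Gamma function.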
406+
407+\end{enumerate}
408+
409+What triggered these investigations? Lorenzo's remark that the way we break up the
410+agglomerates, constant bond lifetime, might lead us to a process similar to radioactive
411+decay, and thence to a Poisson process. I have to admit, I've thought about it in the past,
412+but I did not reach any valuable conclusions. Maybe now I will be luckier. More in the future.
413+
414+\section{Addendum}
415+
416+I just realized that:
417+\begin{enumerate}
418+\item To say that $b(x|y)$ is a homogeneous function [which in our case implies
419+$b(x|y) = y^{-1} \, \tilde{b}(x/y)$] is not the same as stating that $b(x|y)$ is independent of
420+the initial particle size $y$. The functional form shows explicitly that the fragment
421+size distribution depends on the largest particle size as $y^{-1}$.
422+\item What I am implying is that our data show a dependence on $y$ but it is not the
423+trivial (expected?) $1/y$. Do you agree?
424+\item The trial (fitting) function may be modified to force it to remain a homogeneous
425+function and to obey a conservation law by dividing by $i+j = y$. In that case, the straight-chain
426+limit may be imposed, but no combination of exponents would give our beta distribution.
427+\item Our fitting function, the beta distribution, suggests that we have taken the
428+kernel to be proportional to $K_{ij} \sim (i j)^{\alpha -1}$. The proportionality constant
429+is important.
430+\item Remember, this is thinking and writing as if we were talking to each other in front
431+of a blackboard. Nothing conclusive, but I am starting to appreciate the various subtleties.
432+\end{enumerate}
433+
434+\begin{thebibliography}{99}
435+\bibitem{Odriozola2002} G. Odriozola, A. Schmitt, A. Moncho-Jord\'{a}, J. Callejas-Fern\'{a}ndez,
436+R. Mart\'{i}nez-Garc\'{i}a, R. Leone, and R. Hidalgo-\'{A}lvarez, ``Constant bond breakup
437+probability model for reversible aggregation processes'', Phys. Rev. E \textbf{65}, 031405 (2002).
438+
439+%\bibitem{McGradyZiff1987} E.D. McGrady and R.M. Ziff, ```Shattering' transition in fragmentation'',
440+%Phys. Rev. Lett. \textbf{58}, 892 (1987).
441+%
442+%\bibitem{Kostoglou1997} M. Kostoglou, S. Dovas, and A.J. Karabelas, ``On the steady-state size distribution of dispersion in breakage processes", Chem. Engng. Sci. \textbf{52}, 1285 (1997).
443+%
444+%\bibitem{Redner} S. Redner, ``Statistical theory of fragmentation", Chapter 3 in \textit{``Disorder and fracture"}, eds. J.C. Charment, S. Roux, and E. Guyon, Plenum Press, New York (1990).
445+%
446+%\bibitem{Classify} P.G.J. van Dongen and M.H. Ernst, ``Scaling solutions of Smoluchowski's coagulation
447+%equation", J. Stat. Phys. 50, 295 (1988).
448+%
449+%\bibitem{Ushape} S. Ito and S. Yukawa, ``Stochastic modeling on fragmentation process over
450+%lifetime and its dynamical scaling law of fragments distribution", J. Phys. Japan 83, 124005 (2014).
451+%
452+%\bibitem{FragTrees} Z. Kalay and E. Ben-Naim, ``Fragmentation of random trees'', J. Phys. A: Math. Gen. \textbf{48}, 045001 (2015).
453+%
454+%%\bibitem{FragRandom} E. Metin El\c{c}i, M. Weigel, and N.G. Fytas, ``Fragmentation of fractal random structures'', Phys. Rev. Lett. \textbf{114}, 115701 (2015).
455+
456+\end{thebibliography}
457+
458+\end{document}
459+%%% Local Variables:
460+%%% mode: latex
461+%%% TeX-master: t
462+%%% End: