
einsteen
QUOTE (shagster+Nov 28 2007, 02:58 AM)
Note also that the collapse duration for WTC7 is 6.6 s for the case of E1=1GJ for all stories (and where the mass per story is 510E6 kg / 110).

That gives about 2 J/kg, a factor of two higher. I found the doubled value with Maple (for constant E1 and constant m). I would also expect a much higher value than 1.5 J/kg (wtc1,2), because the 1.5 J/kg applies to the damaged stories around the impact zone.
shagster
For WTC7 with E1=1 GJ for every story and a constant mass per story that Greening was using, the collapse goes to completion and the duration is 6.6 s, according to the discrete algebraic crush-up model.

E1/m at the ground level for that case is 4.6 J/kg.

E1/m = 1 GJ / (510E6 kg * 47/110) = 4.6 J/kg
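The discrete algebraic crush-up model being discussed can be sketched in a few lines. This is my reconstruction, not shagster's actual code: it assumes per-story energy bookkeeping with no momentum-transfer loss (crush-up sheds mass at the front rather than accreting it), a building height of 178 m, and g = 9.81:

```python
import math

def discrete_crush_up(E1=1.0e9, m_story=510e6 / 110, n_stories=47,
                      H=178.0, g=9.81):
    """Discrete algebraic crush-up: the moving block sheds one story's mass
    each time the story at the crush front is consumed; each story absorbs
    a fixed energy E1.  Returns (duration, completed)."""
    h = H / n_stories                    # story height
    v = t = 0.0                          # downward speed, elapsed time
    for k in range(n_stories, 0, -1):    # k stories remain above the front
        a = g - E1 / (k * m_story * h)   # net downward acceleration
        v2 = v * v + 2.0 * a * h
        if v2 <= 0.0:                    # block arrests mid-story
            return t - v / a, False      # a < 0 here, so -v/a > 0
        v_next = math.sqrt(v2)
        t += (v_next - v) / a if abs(a) > 1e-9 else h / v
        v = v_next
    return t, True

duration, completed = discrete_crush_up()
```

With the default values the collapse should go to completion in roughly the 6.6 s quoted above.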

einsteen
I think we are confusing the total mass and the mass per story. For wtc1,2 the ratio of E1 and the total mass (in the case that they are constant) is about

E1/M=1.5 J/kg, which follows also from a=(2/3)g for wtc1 in the beginning.

Here is the new wtc7 Maple plot; the vertical axis is the total collapse time and the horizontal axis is the ratio E1/M.

http://i18.tinypic.com/72bjk8i.gif

6.6 seconds implies a ratio of 4.83 J/kg (E1_per_floor/Total_Mass)
shagster
Here are some results for a crush-up collapse using a continuous model. I used ODE toolkit to solve the diffeq.

In this example the building height is 178 m (WTC7 height). The mass per story is Greening's value of 510E6 kg/110 for all stories. E1=1 GJ and is the same for every story. The collapse starts at the ground level.

The ODEs are defined in the equation box in the upper left as seen in the pics below.

v' = -9.81 + E1/(m*h*(178+x)/178)
x' = v
E1 = 1000000000
m = 510*1000000*47/110

The software requires acceleration to be written as v' and velocity as x' on the left hand side of the equations.

v is the velocity of the top of the building. x is the distance from the top of the building toward the ground level and goes from zero at the top of the building to -178 m at the ground level.

m is the total mass of the building.

I had to multiply m by the factor (178+x)/178 so that the mass above the collapse front would change linearly from m to zero as the collapse progressed with x going from 0 to -178 m.

Plot of x vs. t. (use the zoom in photobucket if needed)

http://i134.photobucket.com/albums/q91/sha...ushup1gjxt2.jpg

Plot of v vs. t.

http://i134.photobucket.com/albums/q91/sha...ushup1gjvt2.jpg

The total collapse duration is 6.64 s. The collapse goes to completion. The velocity peaks at about 46.4 m/s near the end of the collapse then decreases to 25 m/s at the end of the collapse.

E1/m at the ground level at the start of collapse is about 4.6 J/kg.
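The same run can be reproduced without ODE Toolkit; here is a minimal forward-Euler sketch of the ODE above (my reconstruction, with a 0.5 m cutoff near the ground as an assumption to avoid the singularity in the mass factor):

```python
# v' = -g + E1/(m*h*(H + x)/H),  x' = v
# x goes from 0 at the top of the building to -H at the ground;
# the factor (H + x)/H scales the total mass m linearly to zero.
g = 9.81
H = 178.0                   # approximate WTC7 height, m
E1 = 1.0e9                  # energy absorbed per story, J
m = 510e6 * 47 / 110        # total building mass, kg (Greening-style)
h = H / 47                  # story height, m

x = v = t = 0.0
dt = 1e-4
peak_speed = 0.0
while x > -(H - 0.5):       # stop just short of the ground
    a = -g + E1 / (m * h * (H + x) / H)
    v += a * dt
    x += v * dt
    t += dt
    peak_speed = max(peak_speed, -v)
    if v >= 0.0 and t > dt:
        break               # arrest (does not occur for these parameters)
```

The reported duration of about 6.6 s and peak speed of about 46 m/s should come out of this sketch to within the accuracy of the step size and cutoff.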

shagster
QUOTE (einsteen+Nov 28 2007, 09:04 AM)

E1/M=1.5 J/kg, which follows also from a=(2/3)g for wtc1 in the beginning.

I didn't quite follow where that value of 1.5 J/kg came from.

For WTC2 and using E1=0.8 GJ at the bottom of the upper block, the value of E1/m at the bottom of the upper block is 5.9 J/kg

E1/m = 0.8 GJ / (510E6 kg*29/110) = 5.9 J/kg

where m is the total mass above the position in question.

If E1/m stays the same throughout WTC2, then it would be the same value at the ground level. (5.9 J/kg)

If E1 stays the same value of 0.8 GJ for every story in WTC2 instead of E1/m staying the same, then the value of E1/m at the ground level would be

E1/m = 5.9 J/kg * 29/110 = 1.56 J/kg
einsteen
In the original Greening paper that ratio was

(0.63*10^9)/(5.1*10^8)=1.24 J/kg

And there it is not just the mass above the impact zone that is used but the complete building mass.

If we take 14 stories with a mass 14m (m=M/110) then from the initial acceleration we have

a = (2/3)g = g - E1/(14mh)

The ratio then is

E1/M=14gh/(110*3)=1.54 J/kg. I always like to use the ratio in the discussion because the whole collapse time/velocity is scalable, i.e.

T(E1,M)=T(k*E1,k*M)

Nice to see that there is a difference between the continuous model and the discrete model. Could you also plot T=T(E1/M_wtc7) assuming the collapse is complete, because it was complete.
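einsteen's ratio and the scaling claim are easy to check numerically. This sketch assumes h = 3.7 m, H = 417 m, and the simple constant-acceleration collapse-time formula t = sqrt(2H/(g - E1/(Mh))) used elsewhere in the thread:

```python
import math

g, h = 9.81, 3.7                 # story height assumed to be 3.7 m

# a = (2/3)g = g - E1/(14mh) with m = M/110 gives E1/M = 14gh/330:
ratio = 14 * g * h / (110 * 3)   # about 1.54 J/kg

def T(E1, M, H=417.0):
    """Collapse time from the simple model t = sqrt(2H/(g - E1/(M*h)))."""
    return math.sqrt(2 * H / (g - E1 / (M * h)))

# T is invariant when E1 and M are scaled by the same factor k:
t1 = T(0.63e9, 5.1e8)
t2 = T(3 * 0.63e9, 3 * 5.1e8)
```

Since the formula depends on E1 and M only through E1/M, scaling both by k leaves T unchanged, which is the T(E1,M) = T(k*E1,k*M) property.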

shagster
QUOTE (einsteen+Nov 28 2007, 10:30 AM)

If we take 14 stories with a mass 14m (m=M/110) then from the initial acceleration we have

a = (2/3)g = g - E1/(14mh)

The ratio then is

E1/M=14gh/(110*3)=1.54 J/kg. I always like to use the ratio in the discussion because the whole collapse time/velocity is scalable, i.e.

Based on the type of values Greening used, we agree on a value near 1.5 J/kg at the ground level of WTC1 or WTC2 if E1 at the ground level is the same as the value at the bottom of the upper block.

The way you are using 2/3g for the crush-down to determine E1/m has some issues, though. There is a slowing due to momentum transfer during the crush-down in addition to that due to E1 that makes the effective acceleration 2/3 g.

shagster
I will try to plot collapse duration as a function of E1/m.

einsteen
QUOTE (shagster+Nov 28 2007, 10:43 AM)

Based on the type of values Greening used, we agree on a value near 1.5 J/kg at the ground level of WTC1 or WTC2 if E1 at the ground level is the same as the value at the bottom of the upper block.

The way you are using 2/3g for the crush-down to determine E1/m has some issues, though. There is a slowing due to momentum transfer during the crush-down in addition to that due to E1 that makes the effective acceleration 2/3 g.

That's correct. It is in fact only valid for the first story.
einsteen
I've tried to put the equation into Maple, but it doesn't work. I don't have much experience with Maple and need to find examples of how to do it; the one I found didn't work for your equation.

Could you, if you have time to solve it, use h=174/47 and g=9.80665? The ratio E1/m (and therefore E1/M) is already in your equation; you could call it alpha, for example, and then try to plot T=T(alpha). I will try Maple later; it would be nice to see the plots and to compare them.
David B. Benson
Alas, I forgot about the 0.0001 multiplier on F4.
So I am now forced to conclude that having a separate term in the resistive force to represent concrete comminution results in a negligible contribution to the overall resistance. This does not mean that there was no comminution during the first 3.82 seconds of collapse, but rather that the comminution is simply part of the F2 consumption of part of the available kinetic energy.

With this, here is better data, with a standard deviation of 0.686 pixels.

Column 1: time in seconds
Column 2: measured drop in meters
Column 3: computed drop in meters
Column 4: location of the crushing front in meters below the (normalized) former top of the tower.
Column 5: The same, non-dimensionalized
Column 6: Speed of the crushing front in m/s
Column 7: The same, non-dimensionalized
Column 8: The stretch (squash)

Regarding this last, what is the % volume in the core of non-crushable materials, taken to be concrete and structural steel?

CODE

0.002   0.190   0.180  80.114   0.176   0.456   0.007   0.522
0.063   0.260   0.203  80.160   0.176   1.055   0.016   0.522
0.085   0.246   0.215  80.185   0.176   1.271   0.019   0.522
0.247   0.444   0.377  80.519   0.176   2.858   0.043   0.519
0.286   0.449   0.435  80.639   0.177   3.242   0.048   0.518
0.289   0.520   0.440  80.648   0.177   3.270   0.049   0.518
0.406   0.759   0.662  81.098   0.178   4.418   0.066   0.514
0.455   0.857   0.776  81.326   0.178   4.898   0.073   0.512
0.504   0.898   0.905  81.581   0.179   5.385   0.080   0.510
0.524   1.062   0.959  81.688   0.179   5.575   0.083   0.509
0.648   1.396   1.351  82.453   0.181   6.789   0.101   0.503
0.681   1.415   1.472  82.685   0.181   7.117   0.106   0.501
0.701   1.626   1.548  82.831   0.181   7.315   0.109   0.500
0.785   1.972   1.890  83.478   0.183   8.136   0.122   0.495
0.839   2.126   2.135  83.935   0.184   8.669   0.130   0.491
0.877   2.211   2.316  84.268   0.185   9.039   0.135   0.489
0.889   2.471   2.375  84.378   0.185   9.156   0.137   0.488
0.998   3.013   2.957  85.431   0.187  10.222   0.153   0.481
1.038   3.173   3.194  85.854   0.188  10.621   0.159   0.478
1.045   3.090   3.236  85.928   0.188  10.689   0.160   0.477
1.090   3.576   3.510  86.413   0.189  11.125   0.166   0.474
1.211   4.284   4.329  87.831   0.192  12.311   0.184   0.465
1.214   4.266   4.356  87.877   0.193  12.348   0.185   0.465
1.240   4.573   4.544  88.198   0.193  12.600   0.188   0.463
1.341   5.332   5.326  89.513   0.196  13.585   0.203   0.455
1.363   5.384   5.509  89.817   0.197  13.803   0.206   0.454
1.405   5.807   5.871  90.415   0.198  14.222   0.213   0.450
1.443   6.286   6.199  90.951   0.199  14.587   0.218   0.447
1.525   6.900   6.966  92.192   0.202  15.398   0.230   0.441
1.560   7.413   7.299  92.726   0.203  15.735   0.235   0.438
1.593   7.615   7.634  93.259   0.204  16.063   0.240   0.435
1.629   8.150   8.006  93.847   0.206  16.418   0.245   0.433
1.694   8.628   8.697  94.929   0.208  17.053   0.255   0.428
1.739   9.342   9.198  95.707   0.210  17.494   0.261   0.424
1.785   9.741   9.724  96.518   0.211  17.943   0.268   0.421
1.816  10.339  10.094  97.085   0.213  18.250   0.273   0.418
1.825  10.048  10.193  97.236   0.213  18.331   0.274   0.418
1.895  11.249  11.062  98.556   0.216  19.006   0.284   0.412
1.945  11.708  11.700  99.516   0.218  19.463   0.291   0.409
1.954  11.738  11.813  99.687   0.218  19.541   0.292   0.408
1.992  12.485  12.314 100.435   0.220  19.877   0.297   0.405
2.068  13.292  13.350 101.972   0.223  20.523   0.307   0.400
2.068  13.504  13.352 101.974   0.223  20.524   0.307   0.400
2.129  14.349  14.218 103.250   0.226  21.022   0.314   0.395
2.140  14.374  14.371 103.474   0.227  21.106   0.315   0.395
2.196  15.281  15.193 104.674   0.229  21.542   0.322   0.391
2.198  15.248  15.223 104.717   0.229  21.557   0.322   0.391
2.252  16.170  16.028 105.885   0.232  21.957   0.328   0.387
2.294  16.672  16.669 106.812   0.234  22.260   0.333   0.384
2.295  16.531  16.691 106.843   0.234  22.270   0.333   0.384
2.313  17.124  16.970 107.245   0.235  22.397   0.335   0.383
2.360  17.925  17.701 108.294   0.237  22.720   0.340   0.380
2.407  18.682  18.454 109.372   0.240  23.037   0.344   0.377
2.426  18.763  18.765 109.815   0.241  23.164   0.346   0.376
2.450  19.133  19.157 110.374   0.242  23.320   0.349   0.375
2.482  19.810  19.688 111.128   0.243  23.526   0.352   0.373
2.517  20.377  20.270 111.952   0.245  23.745   0.355   0.371
2.550  20.938  20.828 112.741   0.247  23.948   0.358   0.369
2.613  22.130  21.909 114.265   0.250  24.325   0.364   0.365
2.622  22.116  22.051 114.464   0.251  24.372   0.364   0.365
2.642  22.485  22.399 114.953   0.252  24.488   0.366   0.364
2.695  23.539  23.341 116.273   0.255  24.793   0.371   0.361
2.754  24.513  24.384 117.731   0.258  25.114   0.375   0.358
2.768  24.840  24.638 118.085   0.259  25.190   0.376   0.358
2.802  25.416  25.261 118.953   0.261  25.372   0.379   0.356
2.850  26.401  26.147 120.185   0.263  25.623   0.383   0.354
2.854  26.370  26.215 120.279   0.263  25.642   0.383   0.353
2.924  27.700  27.526 122.097   0.267  25.996   0.389   0.350
2.929  27.817  27.609 122.212   0.268  26.018   0.389   0.350
2.963  28.404  28.260 123.112   0.270  26.186   0.391   0.348
2.983  28.813  28.626 123.618   0.271  26.279   0.393   0.348
3.058  30.191  30.071 125.611   0.275  26.634   0.398   0.344
3.066  30.538  30.228 125.826   0.276  26.671   0.399   0.344
3.071  30.429  30.330 125.968   0.276  26.696   0.399   0.344
3.151  32.223  31.888 128.110   0.281  27.055   0.404   0.341
3.174  32.467  32.345 128.737   0.282  27.156   0.406   0.340
3.218  33.285  33.220 129.938   0.285  27.347   0.409   0.338
3.253  34.331  33.920 130.897   0.287  27.496   0.411   0.337
3.291  34.885  34.672 131.926   0.289  27.653   0.413   0.336
3.350  36.216  35.873 133.570   0.293  27.896   0.417   0.334
3.364  36.013  36.168 133.973   0.293  27.954   0.418   0.333
3.372  36.491  36.325 134.188   0.294  27.985   0.418   0.333
3.460  38.368  38.134 136.657   0.299  28.331   0.423   0.330
3.476  38.776  38.479 137.128   0.300  28.395   0.424   0.330
3.488  38.805  38.716 137.452   0.301  28.439   0.425   0.329
3.547  40.231  39.971 139.162   0.305  28.666   0.428   0.327
3.607  41.580  41.220 140.863   0.309  28.885   0.432   0.326
3.633  41.661  41.792 141.641   0.310  28.983   0.433   0.325
3.653  42.338  42.200 142.197   0.312  29.052   0.434   0.324
3.682  43.228  42.834 143.060   0.313  29.159   0.436   0.324
3.735  44.068  43.964 144.596   0.317  29.344   0.439   0.322
3.736  44.522  43.995 144.639   0.317  29.350   0.439   0.322
3.757  44.453  44.443 145.248   0.318  29.422   0.440   0.322
3.818  45.953  45.769 147.049   0.322  29.632   0.443   0.320

OneWhiteEye --- Could you kindly do a repeat? Please also add the last column in a separate plot as well. The plots are quite helpful and I'm not set up to do it myself just yet...
OneWhiteEye
QUOTE (David B. Benson+Nov 28 2007, 10:02 PM)
OneWhiteEye --- Could you kindly do a repeat?

I suppose you already know there are two records with the same time value (within preserved accuracy):

CODE

2.068  13.292  13.350 101.972   0.223  20.523   0.307   0.400
2.068  13.504  13.352 101.974   0.223  20.524   0.307   0.400

Can be like throwing a wrench into a turbine in some routines.

Measured (red) and Calculated (blue) Overlay
http://i3.tinypic.com/6khka2u.png

Measured
http://i18.tinypic.com/6lo8ocp.png

Computed
http://i14.tinypic.com/82m69fl.png

Crush Location
http://i3.tinypic.com/8azldt3.png

Crush Location Normalized
http://i4.tinypic.com/82urwib.png

Crush Velocity
http://i4.tinypic.com/7wj6347.png

Crush Velocity Normalized
http://i15.tinypic.com/87kjp0k.png

Stretch
http://i7.tinypic.com/8gfs6px.png

Stacked (bottom to top): Measured, Computed, Stretch
http://i6.tinypic.com/80pmdk5.png

I need to get you some good data, don't I? The top center dish on the antenna can be tracked to about frame 1000. Other locations further up the antenna can probably be continued piecewise for about two more seconds after that. Sadly, not much progress has been made lately in extraction code, except in my mind. I know what to do, just no time to do it yet.

The single frame from the DVD shagster posted showed that my copy has not been resized, so it may only suffer a bit of compression artifact; should be OK for data.
shagster
Einsteen,

My discrete algebraic crush-up model shows that the collapse just barely goes to completion for E1/m=8.16 J/kg. The duration for E1/m=8.16 J/kg is 7.38 s. I used h=174 m /47 and g=9.80665 m/s^2.

Those values are nearly the same as the values you obtained in your recursive model where the equation went complex at that critical point (E1/m=8.17 J/kg, t=7.38 s). Your recursive model is essentially a discrete algebraic one if I understand correctly.

The continuous model is giving a lower value of E1/m at the critical point. I need to look at it more carefully.

I ran the continuous model for various values of E1/m. I will try to post the graphs later.
shagster
Here is another example of a crush-up collapse using a continuous model. This time E1 is 3 GJ for every story.

The building height is 178 m (approximate WTC7 height). The mass per story is Greening's value of 510E6 kg/110 for all stories. E1/m at the ground level is 13.8 J/kg. The collapse starts at the ground level.

Plot of x vs. t.

http://i134.photobucket.com/albums/q91/sha...ushup3GJxt1.jpg

Plot of v vs. t.

http://i134.photobucket.com/albums/q91/sha...ushup3GJvt1.jpg

The total collapse duration is 8.64 s. The collapse does not go to completion. It ends at 163 m, which is somewhere within the 44th story.

The velocity peaks at about 30 m/s near 6.3 s then goes to zero at 8.64 s.
shagster

It would be worth looking at a relatively slow rate collapse such as the Landmark and see how well the collapse curve based on measurements from the video agrees with that predicted by the continuous model. A noticeable slowing near the end of the Landmark collapse is expected.

It would be interesting to see if there is any indication of a few intact stories sitting on top of the rubble pile for a slow moving collapse such as the Landmark.

shagster
Einsteen,

I ran the diffeq solver for a crush-up using your parameters. h=174/47, g=9.80665.

I solved for the following values of a=E1/m (J/kg): 0, 5, 10, 15, 20, and 25. E1/m in the diffeq solver appears as 'a' in the equation section of the solver (upper left corner of pics). E1/m is the value at the ground level before the start of collapse and m is the total mass of the building.

The columns in the following table are E1/m (J/kg), collapse duration t (s), velocity at the end of the collapse v (m/s), and the position of the top of the building at the end of collapse x (m).

A simple E1/m vs. t curve could be plotted using the following data.

E1/m (J/kg), t (s), v (m/s), x (m)

0, 5.96, 58.4, 0
5, 6.66, 12.7, 0
10, 7.74, 0, 5
15, 8.88, 0, 21
20, 10.0, 0, 46
25, 11.1, 0, 78

Displacement vs. time curves:

http://i134.photobucket.com/albums/q91/sha...shupmultxt1.jpg

Velocity vs. time curves:

http://i134.photobucket.com/albums/q91/sha...shupmultvt1.jpg

The collapse doesn't go to completion for a=10, 15, 20, and 25, as seen in the graphs where the velocity returns to zero and the x vs. t curves have a minimum. The duration is the time where the x vs. t curve reaches a minimum or where the v vs. t curve returns to zero.

For a=0 and a=5, the collapse goes to completion. The duration is the time at the point where the x vs. t curve reaches -174.
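The table and curves above can be reproduced approximately with a plain forward-Euler loop over the same ODE, using einsteen's h = 174/47 and g = 9.80665. This is my reconstruction, with a 0.5 m cutoff standing in for "completion":

```python
def crush_up(a, H=174.0, g=9.80665, dt=1e-4):
    """Integrate v' = -g + a/(h*(H + x)/H), x' = v, where a = E1/m in J/kg.
    Returns (duration, end speed, height of the top above ground at the end)."""
    h = H / 47
    x = v = t = 0.0
    while True:
        acc = -g + a / (h * (H + x) / H)
        v += acc * dt
        x += v * dt
        t += dt
        if x <= -(H - 0.5):           # collapse (essentially) complete
            return t, -v, 0.0
        if v >= 0.0:                  # velocity returned to zero: arrest
            return t, 0.0, H + x

results = {a: crush_up(a) for a in (0, 5, 10, 15, 20, 25)}
```

The a=0 row (5.96 s, 58.4 m/s) and the arrest heights and durations for the larger values should come out close to the table; the end speeds of completing runs depend on exactly where "completion" is cut off.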

David B. Benson
QUOTE (OneWhiteEye+Nov 29 2007, 01:24 AM)
I suppose you already know there are two records with the same time value (within preserved accuracy):

I need to get you some good data, don't I?

Oops! I'll have to include at least one more digit in the future.

Still no hurry. I have another pair of projects which need continued attention and I still have the additional data from other locations to integrate.

But in the nonce, while what I learned from the graphs was certainly helpful, could you also do one of the difference between the measured and computed drops? I'd like to check this by eye for 'red noise' correlations. Thanks for all your help!

OneWhiteEye
QUOTE (David B. Benson+Nov 30 2007, 12:19 AM)
...the difference between the measured and computed drops?

Stacked - Measured, Calculated, Difference (Measured - Calculated)
http://i5.tinypic.com/87nw490.png

Difference alone
http://i2.tinypic.com/6kylv1x.png
David B. Benson
QUOTE (OneWhiteEye+Nov 30 2007, 12:34 AM)
Difference ...

Thanks again.
I don't see enough red noise to even bother with a calculation. (There is some, but just a little.)
frater plecticus
There's no such thing s bad or good data. just data.

I guess what you mean is selective data that forms a particular albeit wrong conclusion?

David B. Benson
Simplest is best: Just using as the resistive force

F(Z,S) = k*Z*S,

which is the consumption of some of the kinetic energy available, proves to be ever so slightly better at fitting the data than more complex resistive force equations with terms for concrete comminution or even any fraction of the original resistance of the tower.

Using the B&V crush-down equation with the stretch (squash) as

s(Z) = (s0-s2)(Z0/Z)^s1 + s2

gives the best fit with

s0 = 0.5345
s1 = 2.6401
s2 = 0.2535

and quite similar values using the Seffen crush-down equation. (There, s2 = 2.3379)

I have no explanation of why these rather strange-looking values for the power parameter, s1.
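For reference, the fitted stretch law is easy to evaluate. Z0 here is an assumption (about 80 m, read off the first crush-location entry in the data table), and the parameter values are the best-fit numbers above:

```python
def stretch(Z, Z0=80.0, s0=0.5345, s1=2.6401, s2=0.2535):
    """Benson's fitted stretch (squash): s(Z) = (s0 - s2)*(Z0/Z)**s1 + s2.
    s equals s0 at Z = Z0 and decays toward s2 as the front moves down."""
    return (s0 - s2) * (Z0 / Z) ** s1 + s2
```

At Z = Z0 this returns s0 = 0.5345, consistent with the stretch column of the data table starting near 0.52 and falling as the front descends.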
OneWhiteEye
QUOTE (frater plecticus+Nov 30 2007, 11:42 PM)
There's no such thing s bad or good data. just data.

I guess what you mean is selective data that forms a particular albeit wrong conclusion?

No. Just more accurate data. Maybe 'better' would have been a 'good' choice of word?

I don't even know if the data David B. Benson is posting now comes partially or entirely from me. If it does, I know how inaccurate it can be, manual or automated. I haven't posted any data yet I consider good data... or bad data, either. There will be some good data to come and it will be whatever it is, entirely independent of any theory or usage of it.

I'm curious, frater plecticus, have you been following the discussion of where this data comes from and the various ways numbers are obtained from images?
David B. Benson
QUOTE (frater plecticus+Nov 30 2007, 04:42 PM)
There's no such thing s[sic] bad or good data. just data.

I guess what you mean is selective data that forms a particular albeit wrong conclusion?

Unfortunately, that is not so. First of all, there is dishonest data, purposely distorted to fit some pre-desired conclusion. Even with honest data, there may be systematic errors in the measuring device. Even if there are no systematic errors (worth mentioning), there will be various forms of random errors inherent in the measuring device. Good data has no systematic errors and a signal which stands well out from the noise.

I didn't select the data, poster OneWhiteEye did. I use this data together with the crush-down equations and some form of resistive force formula and stretch (squash) formula to estimate the parameters in those formulas. I then apply naive Bayes factors to compute decibans, a measure by which some of these variations tend to be confirmed and others disconfirmed.

In Bayesian science there are no conclusions, only degrees of confirmation. In this case, the first 3.82 seconds of the collapse of WTC 1, there are over half a dozen different variations which, by Bayesian factors, all are about equally good. One which is decisively disconfirmed is the hypothesis that the collapse is solely due to some downward, constant acceleration, which does not take into account all the known physics anyway.

This research does not attempt to explain why WTC 1 only resisted collapse as hard as determined by any one of the leading hypotheses, combinations of a crush-down equation with a force formula and a stretch formula. For such an explanation, one has to look in more detail at the nature of the collapse, which has been done extensively on this thread and its two predecessors.
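For readers unfamiliar with decibans: a deciban is ten times the base-10 logarithm of a Bayes factor, 10*log10(P(data|H1)/P(data|H2)), so +10 db means the data favor H1 ten to one. A minimal illustration with made-up likelihoods:

```python
import math

def decibans(p_data_given_h1, p_data_given_h2):
    """Weight of evidence for H1 over H2, in decibans."""
    return 10.0 * math.log10(p_data_given_h1 / p_data_given_h2)

db = decibans(0.08, 0.02)   # hypothetical likelihoods under two hypotheses
```

A factor-of-four likelihood ratio works out to about 6 db in favor of H1; equal likelihoods give 0 db.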
David B. Benson
QUOTE (OneWhiteEye+Nov 30 2007, 05:08 PM)
I don't even know if the data David B. Benson is posting now comes partially or entirely from me.  If it does, I know how inaccurate it can be, manual or automated.

Entirely from you.
And it's fairly good, given that the standard deviation for the best hypothesis (so far) is about 0.68 of a pixel!
OneWhiteEye
I also disagree with the notion that there is no bad data. I was going to point out deliberate falsification, too, but realize one could claim that fake data is not data at all. I'd argue that, once a body of information gets accepted into the data stream, it is data because it is treated as such unless and until rejected by whatever means.

If I were to conduct a test involving hundreds of channels of instrumentation, I would expect bad data to come out of at least a few of them, particularly in a destructive test involving physical sensors and transducers. Whether or not I'm the consumer of the data, I would want to see the data excluded from any analysis.
OneWhiteEye
QUOTE (David B. Benson+Dec 1 2007, 12:18 AM)
Entirely from you.
And it's fairly good, given that the standard deviation for the best hypothesis (so far) is about 0.68 of a pixel!

Cool! Just wait 'til it's real good.
OneWhiteEye
Which set(s)?
RealityCheck
QUOTE (frater plecticus+Nov 30 2007, 11:42 PM)
QUOTE (various others here+)
There's no such thing as bad or good data. just data.

I guess what you mean is selective data that forms a particular albeit wrong conclusion?

Hi frater plecticus!

How's conspiracy central coming along now that bin ladin, in an attempt to take the heat off the Taliban, has now taken sole responsibility for the 9/11 attacks?

Makes the discussions HERE purely SCIENTIFIC INTEREST ONLY, from now on, I hope! hehehe.

Cheers all!

RC.
NEU-FONZE
Sorry to switch topics a little but I have been thinking about the problem of simultaneous crush-up-crush-down and thought that posters Shagster and Einsteen (at least) would be interested in this:

Consider the collapse of WTC 1 and let's say it started with the block of floors above (and including) the 96th floor dropping a distance h (~3.7 meters) onto the 95th floor. Let's also say that the energy needed to buckle/collapse the columns on each floor was E1(n) (where n is the floor number counted DOWN from the top of the building), and each floor had the same mass M. The net force acting downwards on the columns supporting the 97th floor would then be [13Mg - E1(13)/h] Newtons. The net force acting downwards on the columns supporting the 95th floor would be [15Mg - E1(15)/h] Newtons. Clearly, if E1(n) was constant from floor to floor, the net compressive force on the columns supporting the 95th floor was LARGER than the net force on the columns supporting the 97th floor. Thus, as the upper mass descended, failure of the columns supporting the 95th floor occurred in preference to failure of the columns supporting the 97th floor, leading to a crush-down collapse.

Incidentally, if [nMg - E1(n)/h] was approximately constant, the towers would collapse with essentially CONSTANT ACCELERATION. For the special case that nMg was approximately EQUAL TO E1(n)/h, WTC 1 would collapse with CONSTANT VELOCITY.

This is all of course a gross simplification of what actually happened but it gives the general idea!

And I should add that IF, (and this is a big IF), the strength of the core columns was such that E1(n) dropped off rapidly as the thicker wide flange core columns transitioned from say 14WF219 to 14WF61, it would be possible for [13Mg - E1(13)/h] to be larger than [15Mg - E1(15)/h], in which case a crush-up collapse would be favored. I suspect that the real situation was a complex combination of crush-down AND crush-up.
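NEU-FONZE's comparison can be made concrete with the Greening-style numbers used elsewhere in the thread (all values here are assumptions for illustration):

```python
g = 9.81
h = 3.7                  # story height, m
M = 510e6 / 110          # mass of one floor, kg
E1 = 0.63e9              # buckling energy per floor, J (taken constant)

def net_force(n):
    """Net downward force n*M*g - E1/h on columns with n floors above, in N."""
    return n * M * g - E1 / h

f_97 = net_force(13)     # columns supporting the 97th floor (13 floors above)
f_95 = net_force(15)     # columns supporting the 95th floor (15 floors above)
```

With constant E1, the deeper columns always see the larger net force, which is the crush-down argument; a rapidly decreasing E1(n) toward the top can reverse the inequality and favor crush-up.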
shagster
A crush-down-only model doesn't capture the effects of the crush-up that occurred early in the collapse. The need to tweak parameters such as force and stretch in a crush-down-only model so that the observed and predicted collapse curves agree would be expected, due in part to the model's inability to capture a crush-up effect early in the collapse.

I haven't tried building a simultaneous crush-up and crush-down model. As a hypothetical example, simultaneous crush-up early in the collapse would make the collapse slightly faster early on compared with one that is solely crush-down. A crush-down-only model would then have to be tweaked to make the collapse faster early on, when crush-up effects are significant, and slower later, when they are not. That tweaking could be done with parameters such as resistive force and stretch. For example, the resistive force could be made to increase as the collapse progresses so that a curve generated by a crush-down-only model matches the measured curve of a collapse where some crush-up occurred in the first couple of seconds.

I haven't actually written a model that captures both crush-up and crush-down effects at the same time, so I don't know if the above is true. I'm using it as an example of how a model's lack of ability to capture a particular phenomenon may result in the need for the parameters of a model to be tweaked to make it agree with measured data.

The tweaking of a parameter such as resistive force in a crush-down-only model to minimize the standard deviation doesn't necessarily mean that the resistive force actually behaves that way in the real world. That tweaking may stem in part from the model's inability to capture a relevant phenomenon.
shagster
A possible more sophisticated model would consist of two coupled differential equations and two sets of boundary conditions that would capture both crush-up and crush-down effects. There would be two collapse curves that could be measured and predicted. One would be the position of the top of the upper block and the other would be the position of the collapse front. The rates of those two would vary with respect to each other as the collapse progressed and the effect of crush-up became less significant later in the collapse. The boundary conditions would describe the initial position and velocity of both the top of the upper block and the collapse front at the start of collapse, as opposed to boundary conditions describing only the top of the upper block in a crush-down only or crush-up only collapse model.
shagster
QUOTE (NEU-FONZE+Dec 1 2007, 01:55 AM)

Incidentally, if [nMg - E1(n)/h] was approximately constant, the towers would collapse with essentially CONSTANT ACCELERATION. For the special case that nMg was approximately EQUAL TO E1(n)/h, WTC 1 would collapse with CONSTANT VELOCITY.

This is all of course a gross simplification of what actually happened but it gives the general idea!

That constant-acceleration crush-up model is a simple model. I posted it because it can give some insight into collapse mechanisms and parameters even though it's not the most accurate of models. The diffeq model and results that I posted are more sophisticated but the concepts are not as easy to grasp with that kind of model.

Collapse velocity during a crush-up would be constant when mg is equal to E1/h. There's an issue as to what the actual velocity would be. For example, if the initial velocity was zero, then the crush-up collapse would not start for the case of mg = E1/h and the collapse duration would be infinite: the denominator would be zero in the equation t = sqrt(2H/(g - E1/(mh))). If there was an initial unimpeded drop through a story height h at the start of collapse, then the velocity would be a constant 8.5 m/s for the rest of the collapse for the case of mg = E1/h.

That simplified crush-up model that I posted doesn't apply to WTC1 or WTC2 in crush-down mode. There is a slowing effect in crush-down due to momentum transfer from the upper block to the next intact story below, in addition to E1. The simple model t = sqrt(2H/(g - E1/(mh))) doesn't capture that effect.
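The momentum-transfer effect can be bolted onto a continuous crush-down model as an accretion term. This is my own sketch, not from the thread, with illustrative parameter values (upper-block mass m0, accreted mass mu per metre of descent):

```python
def crush_down_time(depth, momentum_transfer=True, m0=58.0e6,
                    mu=(510e6 / 110) / 3.7, E1=0.63e9,
                    h=3.7, g=9.81, dt=1e-4):
    """Time for the crush-down front to descend `depth` metres.
    Downward positive:  m(x) v' = m(x) g - E1/h - mu v^2,
    where m(x) = m0 + mu x, and the mu*v^2 term is the momentum cost
    of accreting the stationary stories below."""
    x = v = t = 0.0
    while x < depth and t < 60.0:      # 60 s safety cap
        m = m0 + mu * x
        a = g - E1 / (h * m)
        if momentum_transfer:
            a -= mu * v * v / m
        v += a * dt
        x += v * dt
        t += dt
    return t

t_with = crush_down_time(100.0, momentum_transfer=True)
t_without = crush_down_time(100.0, momentum_transfer=False)
```

Running both variants to the same depth shows the accretion term slowing the collapse, which is exactly the extra effect the simple crush-up formula leaves out.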
David B. Benson
QUOTE (OneWhiteEye+Nov 30 2007, 05:44 PM)
Which set(s)?

Pixel columns 447, 448, 449.
OneWhiteEye
QUOTE (frater plecticus+Nov 30 2007, 11:42 PM)
I guess what you mean is selective data that forms a particular albeit wrong conclusion?

The intellectual equivalent of a drive-by shooting.
OneWhiteEye
QUOTE (David B. Benson+Dec 1 2007, 06:19 PM)
Pixel columns 447, 448, 449.

Are the sets merged into a single set? (Edit: Of course they are! I mean, I suspected that right out the gate when I saw points very close in time but differing by a significant magnitude. That must be a tough fit!)

I could only give you a qualitative idea of the sources of error at this point. Since the antenna displaces to the left as well as down, there is a drift in the location of the edge used to obtain a curve (which is the dark band). The dark band is not a horizontal stripe; it's curved, almost a semi-circle. As the antenna moves laterally, the point that was being measured moves from 449 -> 448 -> 447 -> GONE, likewise 448 -> 447 -> GONE, and so on. New parts move into the pixel column from the right until the antenna is out of that column altogether and the means to measure goes away.

Result: the drift may not be monotonic (but probably is in the first few seconds) and may be more than a pixel in magnitude. There should be some inaccuracy in those sets as a whole, and at least sub-pixel differences between them, resulting from this consideration alone.

Edit: the 2D extraction I'm working on will eliminate problems like that.
David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 11:37 AM)
That must be a tough fit!

Edit: the 2D extraction I'm working on will eliminate problems like that.

Program doesn't care. Just attempts to minimize the weighted variance on all the data. When points are close together the weight attached to the points is substantially less than for points far apart.

Phew!
OneWhiteEye
QUOTE (David B. Benson+Dec 1 2007, 07:33 PM)
Program doesn't care.  Just attempts to minimize the weighted variance on all the data.  When points are close together the weight attached to the points is substantially less than for points far apart.

OK, that makes sense. When you're satisfied that the results are pretty solid for this data, it would be great if you retain those results for comparison with the output derived from subsequent, more accurate data.

You could apply a similar reasoning-based metric to each successive step of the model fit; a meta-fit, if you will.

In what manner, roughly, do you arrive at t0? As I was manually placing the initial points, it was very obvious that motion, unquantifiable by this method except by subjective sub-pixel placement, occurred well before I set down the first non-zero displacement. The fit for very early motion would seem to be strongly dependent on having data of sufficient resolution to show that motion. Fitting backwards from later data, which could be of significantly different character, might be seeing through a glass, darkly.
David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 12:51 PM)
OK, that makes sense.  When you're satisfied that the results are pretty solid for this data, it would be great if you retain those results for comparison with the output derived from subsequent, more accurate data.

You could apply a similar reasoning-based metric to each successive step of the model fit; a meta-fit, if you will.

In what manner, roughly, do you arrive at t0?  As I was manually placing the initial points, it was very obvious that motion, unquantifiable by this method except by subjective sub-pixel placement, occurred well before I set down the first non-zero displacement.

The fit for very early motion would seem to be strongly dependent on having data of sufficient resolution to show that motion.

Fitting backwards from later data, which could be of significantly different character, might be seeing through a glass, darkly.

Oh, storage is available by the terabyte. I keep almost everything.

In effect, I'll do that.

The t0 I use occurs in frames 917--918. The first cut at this is by fitting to the older data NEU-FONZE posted some time ago. The parameter estimation then adjusts this, typically only by a few milliseconds, though one hypothesis adjusts by 19.8 milliseconds. All of these adjustments are to slightly earlier times. Whatever value is chosen, it is, by definition, the moment of the onset of progressive collapse. All earlier times, negative by this reckoning, are times of collapse initiation, when the motion is very slow and the drop very small.

Yes, having the highest possible quality data will be necessary to say anything substantive about the collapse initiation.

Interesting idea. I'll ponder this for awhile.
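The frame arithmetic used above can be sketched briefly. The 29.97 fps NTSC rate is an assumption, but it is consistent with the ~33 ms frame period quoted in the thread:

```python
FPS = 29.97  # assumed NTSC frame rate; one frame ~ 33.4 ms

def frames_to_seconds(frame, t0_frame=917.5):
    """Time relative to an assumed t0 between frames 917 and 918."""
    return (frame - t0_frame) / FPS

frame_period_ms = 1000.0 / FPS
```

Note that the largest t0 adjustment mentioned (19.8 ms) is well under one frame period, so it never moves t0 out of frame 917.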
David B. Benson
Here is a summary of all the different force and stretch functions I have tried. dB denotes decibans, sd denotes weighted standard deviation in meters.

CODE

BV-ZS-F-pow-stretch      dB= 0.0 sd= 0.177

BV-Z+ZS+SS-F-pow-stretch dB= 0.0 sd= 0.178
BV-Z+ZS+SS-F-exp-stretch dB= 0.1 sd= 0.179
BV-sq-F-sq-stretch       dB= 0.2 sd= 0.181
Sef-ZS-F-pow-stretch     dB= 0.2 sd= 0.183
BV-s-F-funny-exp-stretch dB= 0.2 sd= 0.186
BV-sq-F-lin-stretch      dB= 0.2 sd= 0.184
BV-lin-F-sq-stretch      dB= 0.3 sd= 0.193
Sef-Z+ZS+SS-F-pow-stretch dB= 0.3 sd= 0.188
BV-Z+ZS-F-exp-stretch    dB= 0.4 sd= 0.189
BV-Z+SS-F-exp-stretch    dB= 0.6 sd= 0.202
Sef-Z+ZS-F-exp-stretch   dB= 0.7 sd= 0.197
Sef-Z+SS-F-exp-stretch   dB= 0.8 sd= 0.205
BV-s-F-s-stretch         dB= 0.9 sd= 0.202
BV-s-F-sq-stretch        dB= 1.2 sd= 0.210
BV-lin-F-lin-stretch     dB= 1.7 sd= 0.240
BV-const-F-sq-stretch    dB= 1.9 sd= 0.245
Sef-Z+ZS+SS-F-exp-stretch dB= 1.9 sd= 0.236
BV-const-F-lin-stretch   dB= 3.2 sd= 0.272
------ (very) strongly disconfirmed
BV-lin-F-const-stretch   dB=15.0 sd= 0.503
------ decisively disconfirmed
const-acc-no-stretch     dB=23.9 sd= 0.627 a=0.6117g
Sef-const-F-no-stretch   dB=30.8 sd= 0.701
BV-const-F-stretch0.14   dB=57.9 sd= 0.923
BV-const-F-stretch0.18   dB=58.1 sd= 0.924
BV-const-F-no-stretch    dB=58.3 sd= 0.931

By Bayesian principles, according to Harold Jeffreys, none of the first 19 hypotheses are disconfirmed. However, using a constant force and a constant stretch is decisively disconfirmed and even a linearly changing force with a constant stretch is (very) strongly disconfirmed.
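A deciban is 10 times the base-10 logarithm of the Bayes factor. The grading bands below are an assumption, chosen only to be consistent with the annotations in the table above (not disconfirmed below ~5 dB, "(very) strongly" in between, "decisively" above ~20 dB):

```python
import math

def decibans(bayes_factor):
    """Evidence against a hypothesis, in decibans: 10*log10(Bayes factor)."""
    return 10.0 * math.log10(bayes_factor)

def grade(db):
    """Rough Jeffreys-style grading; the band edges are assumptions
    consistent with the table's separator annotations."""
    if db < 5.0:
        return "not disconfirmed"
    if db < 20.0:
        return "(very) strongly disconfirmed"
    return "decisively disconfirmed"
```

On this scale, the 3.2 dB entry is still in the undisconfirmed group, the 15.0 dB entry is (very) strongly disconfirmed, and the entries above 20 dB are decisively disconfirmed, matching the table.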
OneWhiteEye
QUOTE (David B. Benson+Dec 1 2007, 08:31 PM)
In effect, I'll do that.
Excellent

QUOTE
The t0 I use occurs in frames 917--918.  The first cut at this is by fitting to the older data NEU-FONZE posted some time ago.  The parameter estimation then adjusts this, typically only by  a few milliseconds, but one hypothesis adjusts by 19.8 milliseconds.  All of these adjustments are to slightly earlier times.  Whatever value is chosen, it is, by definition, the moment of the onset of progressive collapse.  All earlier, negative times by this reckoning, are times of collapse initiation, when the motion is very slow and the drop very small.
Onset of progressive collapse... meaningful in a physical sense, yes? If arbitrarily high resolution and accuracy were available, would this point in time be discernable directly as a discontinuity in position data or first or second differences?


QUOTE
Here is a summary of all the different force and stretch functions I have tried. dB denotes decibans, sd denotes weighted standard deviation in meters.
...

OneWhiteEye
I find these results quite impressive from the analytical perspective. Grasp of the subject matter is... gradual. My current level is (hopefully) meaningful commentary, limited primarily to a black-box perspective. Writing programs to apply the principles, no. So my questions about naive reasoning will still be naive.

It is interesting that hypotheses become records of data. Mind-boggling, in fact. It's immediately obvious that compositional application of Bayesian principles to produce meta-model data does not lend itself to simple order-ranking as does the first order product shown above. Instead, you'd expect that progressively improved analysis would sharpen the discrimination between hypotheses based on their ability to match observed data, even to the point of changing the ranking of hypotheses. A scalar measure could be derived, but only with posterior knowledge of correctness of the modeling process.

How difficult (or how much time) is it to run an input dataset to produce a table like the one above? Could you run each of C447-449 independently and produce three tables like that?
David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 01:50 PM)
Onset of progressive collapse... meaningful in a physical sense, yes?

If arbitrarily high resolution and accuracy were available, would this point in time be discernible directly as a discontinuity in position data or first or second differences?

Not really. It is the best point in time to begin using the crush-down equations, either that of B&V or else that of Seffen, to obtain the best fit to the data. Physically, the onset is the buckling of the north wall over frames 910--920 which you noticed. (And NIST did not, or, at least, did not include in the NCSTAR1 report. Right, Arthur?)

I'm not sure. The buckling of the walls leads to an acceleration of merely 0.05g or so. But the acceleration at the start of the progressive collapse is about 0.85g. So perhaps this transition is detectable from second differences. But there is something a bit arbitrary about all this for times less than 33 milliseconds or even several times that.
OneWhiteEye
QUOTE
It is the best point in time to begin using the crush-down equations, either that of B&V or else that of Seffen, to obtain the best fit to the data.

Do the various identified hypotheses in the table all have the same value of t0 for purposes of fitness test, or are they associated with a peculiar value that gives the best result for each?

Sorry to bombard you with questions. I've got more.
David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 02:24 PM)
It's immediately obvious that compositional application of Bayesian principles to produce meta-model data does not lend itself to simple order-ranking as does the first order product shown above.

Instead, you'd expect that progressively improved analysis would sharpen the discrimination between hypotheses based on their ability to match observed data, even to the point of changing the ranking of hypotheses.

How difficult (or how much time) is it to run an input dataset to produce a table like the one above?

Could you run each of C447-449 independently and produce three tables like that?

I agree, given my current understanding of Bayesian reasoning. There is a considerable literature on the subject, so something along a meta analysis might have been considered already.

Yes, although I should be most surprised if the rank order changed very much, except for the best few.

This is a mere 0.5 GHz Pentium III beside my desk. I write in SML/NJ, not the fastest language for numerical work. Producing that table required about 2 hours of machine time.

I could, but am unlikely to do so. The problem is that the naive Bayesian factor method works best when (1) there is lots of data and (2) the random error process is known. As for the latter, I have assumed Gaussian distributed (i.e., normal) errors. I know this is only approximate, as thermal distortions (above a fire) are not so distributed. However, since there is so little red noise in the data, this naive assumption seems ok. But it becomes less ok when there is less data. While the Bayesian factor method is one of the sharpest statistical tests available for comparing hypotheses, there is no estimate (known to me) of the likelihood of having compared wrongly. The rule of thumb, then, is to use lots of data.
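A toy version of the naive Bayes-factor comparison under the stated i.i.d. Gaussian-error assumption: each hypothesis's log-likelihood is computed from its fit residuals, and the difference is converted to decibans. The residual inputs and the common sigma are placeholders, not values from the actual analysis:

```python
import math

def gaussian_loglik(residuals, sigma):
    """Log-likelihood of fit residuals under i.i.d. zero-mean Gaussian errors."""
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    return -0.5 * sse / sigma**2 - n * math.log(sigma * math.sqrt(2.0 * math.pi))

def deciban_difference(res_a, res_b, sigma):
    """Evidence for hypothesis A over B in decibans: 10*log10 of the
    likelihood ratio, computed from each hypothesis's residuals."""
    return (10.0 / math.log(10.0)) * (
        gaussian_loglik(res_a, sigma) - gaussian_loglik(res_b, sigma))
```

As the post notes, this comparison sharpens with more data: with few points a bad error model is hard to detect, which is one reason to prefer the ~100-point sets.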
David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 02:53 PM)
Do the various identified hypotheses in the table all have the same value of t0 for purposes of fitness test, or are they associated with a peculiar value that gives the best result for each?

Sorry to bombard you with questions. I've got more.

The high-grade hypotheses each separately parameter-fit t0 to minimize the weighted variance. The low-grade hypotheses have no ability to make such minor adjustments.

No need to apologize! I suspect that this Q&A will be of benefit to several people...
David B. Benson
Since the constant force, constant stretch hypotheses are decisively disconfirmed, other computer modelers might wish to move to using a better model for the (decreasing) stretch, about which I have posted recently. They also might wish to use a resistive force model which represents the consumption of a constant fraction of the available kinetic energy, this appearing to offer the best fit to the data, although only a little bit better than 17 other competing hypotheses. So in

F(m,v) = k*m*v

where m is the mass of the top block and the crushed materials (zones A + B in B & V's terms) and v is the speed, both in SI units (kg,m/s), the constant k is

k = 0.1346 s^(-1)

representing the instantaneous consumption of 26.92% of the available kinetic energy.

======================================================================
I see no a priori reason for this constant fraction. But it remains not only the best fit to the data, but one of the more parsimonious of the hypotheses. So I'll simply appeal to Ockham's Razor...

No crush-up during crush-down. There is no evidence for much of this in the data, and the B&V crush-down equation with the above for the resistive force and the somewhat weird power law for the stretch (squash) does a d****d good job of fitting the data.
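The 26.92% figure follows directly from F = k*m*v: the power dissipated is F*v = k*m*v^2 = 2k times the instantaneous kinetic energy (1/2)mv^2, so a fraction 2k = 0.2692 of the available KE is consumed per second. A quick check (the sample mass and speed are arbitrary):

```python
k = 0.1346  # s^-1, from the fit quoted above

def resistive_force(m, v):
    """F(m, v) = k*m*v, in SI units (kg, m/s -> N)."""
    return k * m * v

# The power drained by F equals 2k times the kinetic energy, so the force
# consumes a fraction 2k = 0.2692 of the available KE per second.
m, v = 3.0e8, 20.0                  # arbitrary sample mass (kg) and speed (m/s)
power = resistive_force(m, v) * v   # dissipated power, W
ke = 0.5 * m * v**2                 # kinetic energy, J
fraction_per_second = power / ke    # = 2*k, independent of m and v
```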
OneWhiteEye
SML, how cool is that? A nod to you, that's pretty obscure. I would certainly be willing to discuss a port to C++ and runs on a faster machine, but I also would assume that your work is (at least currently) proprietary and confidential, so just consider it a thought.

You can probably guess I'm interested in sensitivity of results to variations in input - in the more traditional sense. Since the idea of good and bad data has come up recently, I wondered what variation in data quality did to the output. After all, it is the benchmark against which judgment is made.

Given that the low-grade population has no fine adjust and is of far less interest anyway, the question remains whether hypotheses can move from one group to another (a large change) based on relatively small changes in the magnitudes of the input data, particularly wrt t0.

Edit: My last question could be rephrased as: Are the strongly rejected hypotheses incapable of comparable fit regardless of choice of t0? Are they incapable of matching the observed data in regions of high velocity (well after t0) assuming the data is accurate to some much smaller value?
David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 03:30 PM)
SML, how cool is that?

I would certainly be willing to discuss a port to C++ and ...

You can probably guess I'm interested in sensitivity of results to variations in input - in the more traditional sense.

Since the idea of good and bad data has come up recently, I wondered what variation in data quality did to the output.

... the question remains if hypotheses can move from one group to another (large change) based on relatively small changes in the magnitudes of input data, particularly wrt to t0.

Edit: My last question could be rephrased as: Are the strongly rejected hypotheses incapable of comparable fit regardless of choice of t0?

Are they incapable of matching the observed data in regions of high velocity (well after t0) assuming the data is accurate to some much smaller value?

Cool. Mostly functional programming with type checking and type inference. Helps avoid a wide variety of programming and coding errors. Standard ML of New Jersey

No need. These are not production programs, but rather research programs which are frequently being improved. Run time is not an issue since, when a run is started, I have plenty of other things which need doing.

Ok. First consider solving the crush-down equation via Runge-Kutta once a choice of parameters has been made:

(1 - s(Z))(Z*Z'' + Z'*Z') - Z = -F(Z,S)

where s(Z) is the stretch function and F(Z,S) is the resistive force. (I am just considering the B&V equation. Seffen's is similar.) This is an initial value problem in which at t0 there is some given Z0 and S0. But the ODE is so relaxed (i.e., easy) that there can be no numerical instabilities. Then the weighted variance from the measurements is computed. Select other values for the parameters and repeat. This continues until a local minimum in parameter space is found. Then go to the next hypothesis and repeat. Finally, compute the decibans.

The only comparison of 'data quality' I have done is somewhat subjective, comparing yours with NEU-FONZE's. Yours is enough better that I can obtain weighted standard deviations of 0.177, as opposed to about 0.5 for the older data-set.

No, the nature of the crush-down equation above is such that, for reasonable choices of stretch and force functions, small changes in the input, including the exact choice of t0, cannot produce large changes in the deciban computation, provided there is enough data. The data used to determine the crush-down fit is about 100 data points. That is quite a good quantity in my experience. In particular, regarding the choice of t0, the delta used by the high-grade hypotheses is never more than -19.8 milliseconds, which makes a rather modest change in the initial values, given an estimated acceleration in that period of about 0.05g.
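The Runge-Kutta solution step described above can be sketched generically. The classical RK4 step below is standard; the crush-down right-hand side is only schematic, with the stretch s and resistive force F left as placeholder callables, and the equation taken in the normalized form quoted earlier:

```python
def rk4_step(f, t, y, dt):
    """One classical 4th-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def crush_down_rhs(s, F):
    """Schematic RHS for the normalized B&V-style crush-down equation
    (1 - s(Z))*(Z*Z'' + S^2) - Z = -F(Z, S), solved for Z''.
    State y = (Z, S) with S = Z'; s and F are placeholder functions."""
    def rhs(t, y):
        Z, S = y
        Zdd = ((Z - F(Z, S)) / (1.0 - s(Z)) - S * S) / Z
        return [S, Zdd]
    return rhs
```

As the post says, the initial value problem starts from given Z0 and S0 at t0, and an outer loop over parameter choices minimizes the weighted variance against the measurements.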
OneWhiteEye
Great explanation, that clears up a lot, especially with regards to the well-behaved nature of the solution. I can exclude the low grade fits and confine discussion to competition between the better hypotheses.

QUOTE
Helps avoid a wide variety of programming errors.

I'll bet. I've found that programming genius helps as well.


QUOTE
Run time is not an issue since, when a run is started, I have plenty of other things which need doing.

OK. I only mentioned it because I'd suggested extra runs. Of course, exclusion of universally low-grade hypotheses will reduce the run time better than any environment change.

I understand the correspondence between set size and resultant quality/applicability. In that vein, what differences might there be if I'd only posted one dataset, say the largest one?

What would happen if I revised my pixel:floor ratio up or down a few percent?

Does your process require a single time/position table as input (pre-merged), or does it accept each dataset individually and merge the results as an intermediate step? If I understand correctly based on several statements you've made, the data is merged in advance and considered by the process as a single dataset, each point an independent observation, as opposed to each dataset being uniquely identified as a factor.

From perhaps a more classical perspective, the problem of determining the curve which is to be fit from observations provided is a distinct process from finding the best possible fit obtained from a given hypothesis. Is it still so in your method?
David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 05:03 PM)
I've found that programming genius helps as well.

Of course, exclusion of universally low-grade hypotheses will reduce the run time better than any environment change.

In that vein, what differences might there be if I'd only posted one dataset, say the largest one?

What would happen if I revised my pixel:floor ratio up or down a few percent?

Does your process require a single time/position table as input (pre-merged), or does it accept each dataset individually and merge the results as an intermediate step?

If I understand correctly based on several statements you've made, the data is merged in advance and considered by the process as a single dataset, each point an independent observation, as opposed to each dataset being uniquely identified as a factor.

From perhaps a more classical perspective, the problem of determining the curve which is to be fit from observations provided is a distinct process from finding the best possible fit obtained from a given hypothesis. Is it still so in your method?

Remember what Edison said.

I ordinarily only run two or three hypotheses at a time.

Less discriminatory power in computing the decibans, I believe. Putting it another way, if there were, say, 300 data points, it might be possible that the hypothesis with 3.7 decibans would then be 5 dB down, enough to substantially disconfirm it.

Change the parameter values slightly. Nothing substantial.

Under the general and important rule never to change the measured data, I actually read all three files and do a merge sort.

But in effect, the actual calculations are only done on the merged data, with each point being an independent observation.

The functional form of the force and stretch have been my estimates of physically plausible forms. This leads to having some parameters for the program to find the best fit. Stated another way, this is a numerical estimation program, not artificially intelligent.
QUOTE (David B. Benson+Dec 1 2007, 04:36 PM)
Not really. It is the best point in time to begin using the crush-down equations, either that of B&V or else that of Seffen, to obtain the best fit to the data. Physically, the onset is the buckling of the north wall over frames 910--920 which you noticed. (And NIST did not, or, at least, did not include in the NCSTAR1 report. Right, Arthur?)

I'm not sure exactly what you are asking me, but if I want TIMELINE analysis I usually find the best source is in NIST NCSTAR 1-5A.

Arthur
OneWhiteEye
QUOTE (David B. Benson+Dec 2 2007, 12:27 AM)
Remember...

Thanks again, this is very helpful to me.

Continuing the barrage of questions...

What is the tradeoff between amount of data and quality of data? In comparing my data with that provided by NEU-FONZE, you indicated the SD was reduced and classified it as better data as a result (it seems you have a meta-measure already!) Is it because of the amount of data or perceived/calculated accuracy?

Would the expected process accuracy be improved by merging his data with mine? Or does the 'bad' data contaminate the 'good' ? If the data from NEU-FONZE is, ha, trash, what's to say that one of the three I posted isn't from the same gutter? Is it because of the tight agreement between my sets?

(just kidding NEU-FONZE, the subject is fascinating but I need to vent a little out of the other hemisphere now and again)

If it is because our two collections fall in to different cluster groups, is that a prejudgment based on different standards and outside the analysis itself? (Edit: you already answered this above)

QUOTE
Under the general and important rule to never to change the measured data...

What gets included and excluded matters, too. Forgive me, these are very nitpicky questions; the process seems quite worthy. I just want to understand. What happens if the next dozen sets obtained fit into NEU-FONZE's cluster, leaving the original three I posted the new outliers?

To what extent are systematic errors accounted for?

If you could get datasets from many thousands of cameras at random locations all around the tower, not just at ground level but varied elevation including above, would it even be necessary for you to have the position values in meters? Or could the scale factor be derived from sheer numbers of observations against the known physical constraints? That may seem like a stupid question, but it seems reasonable if one considers the observations as containing systematic errors of arbitrary magnitude. Pixels differ from meters only by a scale factor across a sufficiently small region, however far over unity the scale factor may be.

(Edit: the above begs the question of establishing a coordinate system but, then, that is part of the issue)

Now, what is the effect of much smaller, under unity, but still appreciable percentages of systematic error? The relation to the measurement methods? Remember the F4?
NEU-FONZE
I have recently had very intense discussions with Prof. Bazant and other 9/11 researchers about what constitutes "evidence" in the scientific study of the WTC collapse events ( for WTC 1, 2 & 7); but this is perhaps a philosophical debate I am having, and for now let's simply accept the visual record we have.

Certainly I wish we had DIRECT measurements on the movement of the roof line of WTC 1 & 2 taken on Sept 11th, 2001 with electronic devices installed in the towers at the time of collapse. Obviously we do NOT have that kind of detailed quality data, so we are forced to rely on the available video records to reconstruct the collapse profiles. This already introduces timing errors, but at least we have something to work with .... I have analysed several videos MANUALLY, as best I could, and I have given DBB these estimates. If there are errors, they are mostly from under- or possibly over-correction of perspective effects.
Chainsaw,
QUOTE (NEU-FONZE+Dec 2 2007, 02:43 AM)
I have recently had very intense discussions with Prof. Bazant and other 9/11 researchers about what constitutes "evidence" in the scientific study of the WTC collapse events ( for WTC 1, 2 & 7); but this is perhaps a philosophical debate I am having, and for now let's simply accept the visual record we have.

Certainly I wish we had DIRECT measurements on the movement of the roof line of WTC 1 & 2 taken on Sept 11th, 2001 with electronic devices installed in the towers at the time of collapse. Obviously we do NOT have that kind of detailed quality data, so we are forced to rely on the available video records to reconstruct the collapse profiles. This already introduces timing errors, but at least we have something to work with .... I have analysed several videos MANUALLY, as best I could, and I have given DBB these estimates. If there are errors, they are mostly from under- or possibly over-correction of perspective effects.

NEU-FONZE,
You cannot correct for perspective effects without first correcting for the lens distortion of the image. The light is bent as it goes through the camera lenses, and each lens distorts the image differently to some degree, so errors are to be expected.
If you use more than one camera, the lens distortions are multiplied.
No camera known delivers a perfect image; the best that can be done is to build a reference model, of known parameters, to alleviate the distortions.
I deal with perspective and lens distortion a lot; it is part of what I do.
It may not apply here, but it might offer an explanation for the mistakes, if the distortion caused by light passing through the lens was not taken into account.
I think simply that OneWhiteEye's approach is just better at taking lens distortion into account when comparing the individual images; that would explain the discrepancies in the data.
I'm not sure the distortion issue is particularly applicable in THIS instance.

Consider that they are simply comparing the relative motion from one frame to the next where all frames are derived from the SAME lens.

Arthur

David B. Benson
QUOTE (OneWhiteEye+Dec 1 2007, 06:04 PM)
What is the tradeoff between amount of data and quality of data?

In comparing my data with that provided by NEU-FONZE, you indicated the SD was reduced and classified it as better data as a result (it seems you have a meta-measure already!) Is it because of the amount of data or perceived/calculated accuracy?

Would the expected process accuracy be improved by merging his data with mine? ... Is it because of the tight agreement between my sets?

What happens if the next dozen sets obtained fit into NEU-FONZE's cluster, leaving the original three I posted the new outliers?

To what extent are systematic errors accounted for?

If you could get datasets from many thousands of cameras at random locations all around the tower, not just at ground level but varied elevation including above, would it even be necessary for you to have the position values in meters?

Or could the scale factor be derived from sheer numbers of observations against the known physical constraints?

Remember the F4?

For the Bayesian factor method, quantity is important. Regarding quality, systematic errors are always bad, unless these are known in advance and so can be adjusted for.

NEU-FONZE had 15 data points. I'm using about 100 of yours. I suspect that, by comparison with yours, some of his later measurements have some form of error.

I hadn't thought about merging the two. I'll have to ponder that some more. Your three data sets are all in close agreement.

Seems most unlikely. If it does I'll think about it then.

The only systematic error corrected for is the camera location, and hence determining the meters-per-pixel on the antenna tower. Using your careful location detection, I've resolved that issue. There is also the tilt of the antenna tower, which should eventually be corrected for as well. Also, the camera is not absolutely directly north of the antenna tower, but this effect seems to me (I could be wrong) to be too small to bother compensating for.

Eventually everything needs to be reduced to meters in a consistent fashion.

Known physical constraints give the appropriate conversion from pixels to meters for each camera location.

"Remember the F4?" Huh?
David B. Benson
QUOTE (NEU-FONZE+Dec 1 2007, 07:43 PM)
... This already introduces timing errors, but at least we have something to work with ....

I have analyzed several videos MANUALLY, as best I could, and I have given DBB these estimates. If there are errors, they are mostly from under - or possibly over - correction of perspective effects.

The timing errors are not very important for WTC 1, especially since OneWhiteEye noticed the buckling of the north wall over frames 910-920. For me, this establishes the commencement of progressive collapse within a third of a second. Your estimate of t0 is at frame 917.7, if I can use that measure. That is very, very good! The parameter estimations which add or subtract a delta from this never change it by more than -19 milliseconds, which is still in frame 917. I'll say that we now know the collapse commencement to within 33 milliseconds, which is more precise than the physical reality actually allows.

And thank you! With this newer data set I suspect that there are errors in the last few measurements, but have not analyzed the matter in any detail.

======================================
To continue to answer one of OneWhiteEye's questions, adding 15 data points clearly sharpens the ability of the Bayesian factor method to separate hypotheses. I'm not sure how much, but I suspect a logarithmic dependence in the quantity of data. However, I suspect that all the weighted standard deviations will increase if NEU-FONZE's data is added to the existing set.

Rather than do that right away, I first need to make use of the remaining data taken from the tower itself, and also consider the data that einsteen sent me some time ago.
David B. Benson
Assuming statistical independence, the ability of the Bayes factor method to separate hypotheses grows linearly in the quantity of data.
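Under independence, the likelihood ratios of individual points multiply, so their decibans add, and the expected evidence grows linearly with the number of points. A quick demonstration with a toy pair of zero-mean Gaussian hypotheses (the sigmas and data are arbitrary):

```python
import math

def point_decibans(x, sigma_a, sigma_b):
    """Per-point evidence for sigma_a over sigma_b (zero-mean Gaussians)."""
    la = -0.5 * (x / sigma_a) ** 2 - math.log(sigma_a)
    lb = -0.5 * (x / sigma_b) ** 2 - math.log(sigma_b)
    return 10.0 * (la - lb) / math.log(10.0)

def total_decibans(data, sigma_a, sigma_b):
    # Independence => likelihoods multiply => decibans add over the points.
    return sum(point_decibans(x, sigma_a, sigma_b) for x in data)

data = [0.1, -0.2, 0.15, 0.05]
# Doubling the (independent) data doubles the evidence in decibans.
```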
OneWhiteEye

Lens distortion is always present to some degree, but you're right adoucette, it's not the dominant source of error in my measurements, though it contributes to the overall error when using techniques like smear-o-grams.

As David B. Benson mentioned, the tilt of the antenna is an issue, one that is directly observed when comparing the distances between points on the antenna in the first few seconds. These curves, which were in a different post than the C447-449 data, are accurate (enough to derive the tilt over time reasonably well) but they measure two very close but different locations so naturally they don't agree. The pairs of points can be interpreted as two independent particles with three degrees of freedom or two points on a rigid body with five (ignoring rotation about the antenna long axis). The latter seems more useful to me, especially as it's true.

But all of that is projected onto a 2D image plane, with perspective and lens distortion error.

The one video I'm currently working on is very good in that it's as close to rectilinear as one could ever expect, under the circumstances. I think it's way better than the F4 video, which was produced as part of a controlled test.

The angle of view is around 7-8 degrees, making the projection onto the image plane pretty flat. There is a small but easily measurable deviation from vertical which is slightly different for the three long, true (?) vertical lines in the view and which, most importantly, indicates that large vertical motion along the axis of the tower would result in a lateral translation of several pixels in the image plane.

The camera, it seems, is not quite oriented on vertical. There also is the matter of how true the edges of the tower were by the time this video starts. The other building in the shot provides some information to resolve this.

Horizontally, you have essentially the same thing, with more lines at the floors but with the added term of the orientation of the tower wall face to the camera. Floors that should be at the same height in real life are not in the image. A single scale factor derived from many floors just averages away all these details, which is fine for an approximation.

I did start to trace over all the known vertical and horizontals to make a corrective vector field but it was set aside in favor of (honestly) more interesting and useful things. For a final correction, yes, there probably will be an associated position-dependent scale applied.

Other videos that I've seen are not so ideal as this one. In fact, every other one is worse in every way, making this one clip very important.
OneWhiteEye
Once again, David B. Benson, thanks for the excellent answers. The picture is even clearer. Perhaps there's an analog to Bayesian reasoning in my brain.

QUOTE
"Remember the F4?" Huh?

In the F4 video, the target traversed the field of view from one extreme until just past the vertical plane defined by the optical axis. It's very likely this systematic error made a significant time-varying acceleration appear identically zero in a naive 2D extraction, though this is not yet proven.

The situation with the WTC1 video is similar if you swap horizontal and vertical, though the great distance and consequent small angles produce a much smaller effect. Not necessarily negligible, though, if you're splitting hairs between closely competing theories. It depends on the magnitude of the discrepancy, which can be calculated (see post above). Other errors are undoubtedly larger but, once those are eliminated, this will eventually have its turn.

Speaking of turns, now it's your turn: what did Edison say?

QUOTE

systematic errors are always bad, unless these are known in advance and so can be adjusted for.

Ok, that's what I would have expected. This gets to the question of what happens when small deflections in a curve to be fitted are not due to noise but rather systematic error from switching between datasets. The C447-449 is manual extraction from three smear-o-grams. Individually, these curves are fairly smooth. But they do not agree because, with these smear-o-grams, there is no specific, highly-defined feature being tracked. Only in the most special of cases would they be expected to agree, certainly not here, and so the individual curves do diverge quite obviously towards the end.

Now, I understand you've said your process is not concerned with random deviations, and I see that the computed curve is smoother than the merged measurement field but, in a smear-o-gram, the feature present in the initial pixel changes columns over the course of the measurement, and new features are introduced. The crossover means these aren't independent sets, the ever-changing feature location means systematic errors of up to a few percent are expected, both inter-column and over time in the same column.

This is similar to the case of the other data from two different points on the antenna, but with the added complication that there aren't any well-defined features being followed.

QUOTE
Assuming statistical independence, the ability of the Bayes factor method to separate hypotheses grows linearly in the quantity of data.

Perhaps the most important part of this, with respect to merging the datasets, is the first part: are these three datasets independent? I say they aren't, but is that significant? This goes to the issue of quality versus quantity. In my experience with classifiers (a different thing, I know) operating on text, the algorithms failed to detect a perfect match when trained with only one positive example. The applicability, and therefore accuracy, depended on the volume of training sets which, when sufficient, allowed fairly good discrimination of the sloppiest of input. With small numbers of examples, even though precise matching (accurate, noise-free data) is always possible, classifier methods are the wrong solution for pattern matching.

The C447-449 data is not only smear-o-gram, but manually derived. This means less than a data point per frame, but the rendering is intrinsically smoothed (the human filter) and quite accurate for overall motion. By contrast, an automated method will produce a data point per frame, far greater resolution, with lots of small magnitude noise.

In terms of what I understand of your method, if I were to supply a sample of every other frame using an automated method, I'd get half the discriminatory power (measured in width of SD?) out of the method. But that's not the same as supplying a manual dataset with half as many points as the automated; do you get my point?

This is why I'd be very interested in the results of a run that utilized only the largest of the three sets.

More in a second.
David B. Benson
Since I have no understanding regarding why a constant fraction of the available kinetic energy is continually consumed for the first 3.82 seconds of the collapse of WTC 1, and this consumption provides the sole resistive force, I thought I would try causing this resistive force to vary linearly with the position of the crushing front measured as (Z-Z0).

Well, this provides two more hypotheses, one for the B&V crush-down equation and one for Seffen's. For the B&V case, the resisting force slowly declines, the angle being about a negative 50 minute of arc. For the Seffen case, the resisting force slowly rises at about the same angle.

Are these better? No, the B&V case is essentially the same as assuming a constant consumption and the Seffen is 0.1 dB down. Yawn.

============================
While that surprising discovery remains poorly understood, the power law (with exponent about 2.3) is only phenomenological and not derived from anything physical. (I had made a mistake in attempting to do so, which resulted in this power law, whose sole virtue is helping to provide a good fit to the data.)
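The linear-in-(Z-Z0) variation can be sketched numerically. This is my own toy reconstruction, not Benson's program: it integrates the simplest constant-density form of the B&V crush-down equation by forward Euler, with the resistive force F varying linearly with front travel. All parameter values (mu, z0, H, the F0 examples) are illustrative assumptions.

```python
def crush_down(F0, k, z0=80.0, H=417.0, mu=1.0e6, g=9.81, dt=1e-3):
    """Crush front at depth z below the original roof; m(z) = mu*z is the
    accreted moving mass. F(z) = F0*(1 + k*(z - z0)) resists the fall.
    Returns (time, velocity) when the front reaches H, or (time, 0.0)
    if the front arrests first."""
    z, v, t = z0, 0.0, 0.0
    while z < H:
        m = mu * z
        F = F0 * (1.0 + k * (z - z0))
        # Momentum form of B&V crush-down: m*dv/dt + (dm/dz)*v^2 = m*g - F
        a = g - F / m - (mu / m) * v * v
        v += a * dt
        if v < 0.0:            # front arrested before reaching the ground
            return t, 0.0
        z += v * dt
        t += dt
    return t, v
```

With F0 = 0 this reduces to pure mass accretion, which by itself already holds the front well below free-fall acceleration; the slowly declining or rising F that Benson describes would then tilt the F(z) line via the sign of k.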
OneWhiteEye
-continuing from previous post-

I really thought there were many more points in one of them but they're all about the same. So, never mind about running the 'biggest' one.

The other concerns remain. What is the effect of merging these datasets directly then accounting by weighted variance when they aren't expected to agree? How would it compare to do pre-adjustment in the form of creating a composite set based on, say, a best polyfit of each linearly superposed and resampled to fifty points?

Yes, I see that violates the precept of 'monkeying with the data' but it also satisfies another by adjusting to reduce known, systematic errors. Which wins?
OneWhiteEye
Turnabout: if I've pulled up the correct datasets, one is slightly larger than the other two. Does one highly accurate dataset outweigh even three times as many points that have significant disagreement but are individually accurate because they look at different things? If there's even a chance that they are comparable, I'd love to see a run of the biggest.
David B. Benson
QUOTE (OneWhiteEye+Dec 2 2007, 03:01 PM)
Perhaps there's an analog to Bayesian reasoning in my brain.

... if you're splitting hairs between closely competing theories.

Speaking of turns, now it's your turn: what did Edison say?

... certainly not here, and so the individual curves do diverge quite obviously towards the end.

Now, I understand you've said your process is not concerned with random deviations, and I see that the computed curve is smoother than the merged measurement field but, in a smear-o-gram, the feature present in the initial pixel changes columns over the course of the measurement, and new features are introduced. The crossover means these aren't independent sets, the ever-changing feature location means systematic errors of up to a few percent are expected, both inter-column and over time in the same column.

Perhaps the most important part of this, with respect to merging the datasets, is the first part: are these three datasets independent? I say they aren't, but is that significant?

... will produce a data point per frame, far greater resolution, with lots of small magnitude noise.

In terms of what I understand of your method, if I were to supply a sample of every other frame using an automated method, I'd get half the discrimatory[sic] power (measured in width of SD?) out of the method. But that's not the same as supplying a manual dataset with half as many points as the automated; do you get my point?

This is why I'd be very interested in the results of a run that utilized only the largest of the three sets.

Bayesians do so claim, some of them.

The Bayesian factor method won't help here unless there is a massive amount of data.

Edison said something to the effect that genius was 99% perspiration and only 1% inspiration.

But even so the weighted standard deviation is only 0.177 meters. So I don't see any difficulties.

No, but in the naive Bayesian factor method, all such errors are treated as Gaussian noise. I fail to follow why you consider those to be systematic errors.

I don't think it is significant.

Sounds better. The small magnitude noise is not problematic in the slightest.

Half the discriminatory power measured in dB produced. And no, I don't 'get your point'.

Sorry, not following this. What is the reason for this interest?
David B. Benson
QUOTE (OneWhiteEye+Dec 2 2007, 03:56 PM)
Does one highly accurate dataset outweigh even three times as many points that have significant disagreement but are individually accurate because they look at different things?

In principle, no. Imagine two competing hypotheses H and K and the data from a run of an experiment. Suppose H is 1 dB better than K. Suppose one ran 'the same' experiment two more times. Assuming statistical independence of the errors in the three runs, one has that H is 3 dB better than K.

What will happen using only 1/3 of the pixel column 447--449 data is that the weighted standard deviation will be smaller, the parameters will be slightly different, but the discriminatory power will be less, although likely more than a third of the current deciban values, owing to, at least, the parameter adjustment.

Since the maximum drop is almost 46 meters, the best weighted standard deviation of only 0.177 meters is just 0.38% of the maximum drop. That's a good fit despite combining the three sets.
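The deciban bookkeeping here can be sketched directly. The following is my reading of the ranking scheme, not Benson's actual program: assume i.i.d. Gaussian errors with each hypothesis's maximum-likelihood sigma (ignoring Benson's weighting). Under that assumption the deciban difference between two fits to the same data depends only on the ratio of their residual standard deviations and the number of points, and decibans from independent runs simply add, as in the 1 dB → 3 dB example above.

```python
import math

def decibans(residuals_h, residuals_k):
    """Evidence for hypothesis H over K in decibans (10*log10 of the
    likelihood ratio), assuming i.i.d. Gaussian errors with each
    hypothesis's maximum-likelihood sigma."""
    n = len(residuals_h)
    assert n == len(residuals_k), "compare fits to the same data"
    var_h = sum(r * r for r in residuals_h) / n   # ML variance of H's fit
    var_k = sum(r * r for r in residuals_k) / n
    # Profiled Gaussian log-likelihood is -n/2 * (1 + log(2*pi*var)),
    # so the difference reduces to (n/2) * log(var_k / var_h).
    return 10.0 * (n / 2.0) * math.log10(var_k / var_h)
```

Note the factor of n in front: this is why, all else equal, halving the number of points halves the decibans, while shrinking both standard deviations by the same ratio changes nothing.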
OneWhiteEye
QUOTE
genius was 99% perspiration and only 1% inspiration

Ah yes. Of course.

Your second post explains almost to satisfaction. I can appreciate the whys as well as the eventual result being a good fit practically.

In a nutshell, I'm looking at a calculated column that is bumpy, where none of the inputs were. If it were bumpy because of noise, that would be fine, but it's bumpy because observations of three (slightly) different things are combined. Though it provides a good fit, can it be made better? I want to make sure that 3 times the number of non-independent points under these circumstances is indeed better than one nice smooth set.

Say I provide two datasets, both manually derived, both with 50 points, both measuring vertical deflection, but one feature is a point on the antenna, the other a point on the NE corner. Do you merge them?

One thing I might need to make clear is that only my laziness prevents you from having a hundred points for each column. Would you merge them?

Or

I can go back to the clearest of the three columns and add more points interstitially until there are as many as the combination of the three. It's not interpolation; they'd be real observations and at least consistent within the constraints of a smear-o-gram.

In the one current case you have three sets of observations that are not of the same feature simultaneously. In the other, a single coherent set of observations equivalent to one channel of the above data. Same number of points.

What sort of difference is expected?
David B. Benson
QUOTE (OneWhiteEye+Dec 2 2007, 04:54 PM)
Though it provides a good fit, can it be made better?

Say I provide two datasets, both manually derived, both with 50 points, both measuring vertical deflection, but one feature is a point on the antenna, the other a point on the NE corner. Do you merge them?

What sort of difference is expected?

That is usually helpful in the main goal: disconfirming various hypotheses, ideally leaving only one as the obviously most explanatory of the data.

That is what, in effect, I plan to do with the data from various points along the roof line. It won't be a simple merge, since it will be necessary to adjust for the known systematic differences between antenna tower data and roof line data. (Maybe I've handled that already.)

Since all of the hypotheses are likely to have smaller weighted standard deviations, that wouldn't necessarily produce any more discriminatory power.
OneWhiteEye
QUOTE (David B. Benson+Dec 3 2007, 12:05 AM)
That is usually helpful in the main goal: disconfirming various hypotheses, ideally leaving only one as the obviously most explanatory of the data.

That is what, in effect, I plan to do with the data from various points along the roof line.  It won't be a simple merge, since it will be necessary to adjust for the known systematic differences between antenna tower data and roof line data.  (Maybe I've handled that already.)

Since all of the hypotheses are likely to have smaller weighted standard deviations, that wouldn't necessarily produce any more discriminatory power.

OK, now I think I can explain my point better. Even in the case where you are considering aggregating points on the roofline, there must be some adjustment to account for known systematic differences BEFORE they can be treated as a single dataset.

I claim the same is true of the three pixel columns to a lesser, but unknown, extent. It may not be enough to make a bad fit, but it is a systematic error in theory, and one which cannot be corrected in practice, at least not easily.

I would say the differences due to spatial separation in the pixel column smear-o-grams could rival that of differing points on the roofline. For all I know the smear-o-gram curve 'climbs' down the antenna two meters in the first three seconds!

If shortage of points is the issue, better in my mind that I should give you a hundred points of the (almost) same thing than 33 each of three different, but partially coupled, things.

The automated method will provide a point per frame, so any feature will automatically have enough points for good discrimination... the noise is of no consequence as you stated. It could even be filtered if it were, but it will be very small compared to deflection except where deflection rate is so small that it doesn't matter there, either.

OneWhiteEye
QUOTE (David B. Benson+Dec 2 2007, 11:16 PM)
I fail to follow why you consider those to be systematic errors.

Allow me to explain, then. First of all, they're not random errors. That doesn't say what they are, but they're not the same as fluctuations from smoothness BECAUSE of random errors. Treating them as random errors may be perfectly fine.

A random error would be the jiggle of my hands due to too much caffeine, when placing the points. Or fuzziness of pixels, either from the source or not having my glasses that day; that's random.

Now, for a moment, forget the difference between x,y plotting in the image plane and smear-o-grams, though that difference is mighty. A subject to itself.

If I produce 3 curves that are fairly accurate measurements of different things, I cannot expect them to match. Suppose they don't. If I do a simple curve fit of each in turn and find I can get a real good fit of each, then the fit I will get by merging them will not be as good and won't necessarily capture the character of any of them. Maybe quite a difference in fitness - that's for regular regression or similar, of course.

Maybe systematic is off the mark. In such a case, this is a procedural error. I'm looking at different things, I have no business merging them. The practical effect is identical to a systematic error; unfortunately not just a monotonic offset or scale adjustment over time but also the introduction of oscillation that stems from alternating between multiple smooth curves that don't agree because they observe different things.

It could be that the features I measure are very close and all part of the same rigid body. I might be able to justify aggregating them in advance of any fit, as you consider with the roofline, but I can't just throw them in unadjusted.

A noisy sensor measuring one feature at 3x Hz is not the same thing as 3 accurate and stable sensors measuring three different things at 1x Hz. In your treatment, it may come out the same way, if we were tracking a feature, but...

Again, smear-o-grams don't even track a feature. They provide a convenient curve for making manual measurements. That curve, however, comes from no one thing or place.
OneWhiteEye
QUOTE (David B. Benson+Dec 3 2007, 12:05 AM)
Since all of the hypotheses are likely to have smaller weighted standard deviations, that wouldn't necessarily produce any more discriminatory power.

But anything that changed the characteristic (degree, order, etc) of the curve could cause quite a change in the hypothesis ranking, couldn't it? Even lead to good fits being rejected soundly? Just curious. Sometimes there's not a big difference between given 2nd and 3rd order polynomials wrt value over a limited range, but they imply quite a different underlying model. Small corrections could lead to favoring one over the other.

No additional discriminatory power is needed if the picture is already clear.

einsteen
Shagster,

Thanks. It looks like your discrete calculation gives the same answer. Yes, it can be called 'discrete algebraic' because in fact no numerical methods are used to estimate it; it is cumbersome to work out by hand, but the result is indeed an exact value.

You gave this

E1/m (J/kg), t (s), v (m/s), x (m)

0, 5.96, 58.4, 0
5, 6.66, 12.7, 0
10, 7.74, 0, 5
15, 8.88, 0, 21
20, 10.0, 0, 46
25, 11.1, 0, 78

I found something interesting, if I use the maple discrete calculation I get

E1/M, t(s)

0, 5.96
5, 6.63
10, 7.73-0.23i
15, 8.90-1.04i
20, 10.00 - 2.33i
25, 11.07 - 4.21i

The real part then is the collapse time, which is almost the same as your values.
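The imaginary parts have a natural reading: in a story-by-story energy balance the end-of-story velocity comes from a square root, and once E1/M is large enough to arrest the front the argument goes negative, so a closed-form (Maple) expression keeps marching and returns complex times. Here is a toy version, my own sketch with constant total mass and no compaction, so its numbers will not match shagster's model exactly:

```python
import math

# Toy discrete crush-up for a WTC7-sized building. My own sketch -
# constant total moving mass, no compaction of crushed stories.
H = 178.0            # building height (m)
N = 47               # stories
h = H / N            # story height (m)
g = 9.81

def collapse_time(E1_over_M, v0=0.0):
    """March story by story. E1_over_M is the crush energy per story
    divided by the total moving mass (J/kg). Returns (time, completed)."""
    v, t = v0, 0.0
    for _ in range(N):
        a = g - E1_over_M / h            # net acceleration across the story
        disc = v * v + 2.0 * a * h       # squared velocity at story's end
        if disc < 0.0:
            # Arrest: a closed-form solution would take the square root of
            # a negative number here - the origin of the complex times.
            return t, False
        v_next = math.sqrt(disc)
        t += 2.0 * h / (v + v_next)      # exact for constant acceleration
        v = v_next
    return t, True
```

For E1/M = 0 the story times telescope to exactly sqrt(2H/g), about 6.02 s in this sketch, and the duration rises monotonically with E1/M until the front arrests.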
OneWhiteEye
Hi einsteen. What I post next should not be taken as a criticism of the smear-o-gram technique, just a cautionary tale. The genius of this technique is what drew me into this in the first place.
einsteen
Well, I like criticism, don't hesitate... I still have to read the rest of the messages.

Shagster,

Here is the plot of the above post

http://i12.tinypic.com/6z87dch.gif

A strange function, smooth if the collapse is complete, but after that it isn't?
OneWhiteEye
I've prepared some animations to demonstrate what I mean concerning smear-o-grams and tracking a feature like the dark band on the antenna. First have a look at the dark band:

http://i6.tinypic.com/7wtnuow.png

Notice the band is not straight. Next, look at this graphic which represents a simplified antenna/band arrangement emulating the curve of the dark band. The 'antenna' in this animation falls at a constant speed straight down:

http://i12.tinypic.com/7ypmujk.gif

A vertical green line in the animation above indicates the pixel column from which this smear-o-gram is made:

http://i9.tinypic.com/7ypq2kj.png

The smear-o-gram shows a straight line, as expected for the ideal case of pure vertical motion. Now, let's add some constant speed horizontal translation to the left and see the animation:

http://i7.tinypic.com/7x9xn4j.gif

Notice the place where the green line (pixel slice) intersects the dark band changes as the antenna drops. This leads to a smear-o-gram like this:

http://i6.tinypic.com/8dxnp6g.png

It may not be obvious but this one is not straight. In even a manual extraction, the data will reflect this to the tune of several pixels of non-uniform distortion. To make the differences apparent to the eye, I've put them together in a blown up animation:

http://i2.tinypic.com/6ytw5f4.gif

In actuality, the antenna rotates as well. An animation showing rotation about one axis, no horizontal translation:

http://i1.tinypic.com/8787gw9.gif

And the associated smear-o-gram:

http://i5.tinypic.com/7xucapv.png

In this last one, the rotation adds a net vertical translation, so this is not expected to be straight. It does illustrate what happens when you mix rotation, displacement, irregular features, and smear-o-grams. All three grams together:

http://i5.tinypic.com/7y7glxh.gif

None of the above addresses specifically the issue of merging data from adjacent columns. Simply consider that the observed distortion is apportioned to the columns in a time-varying fashion.

My point is that anything introducing non-random, non-constant, low-frequency drift of up to several pixels HAS to affect the output of an analysis with many equivalent contending hypotheses.

-----

A 2D extraction, manual or automatic, produces two-axis displacements of a specific feature, wherever it may go in the image.
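A few lines of arithmetic reproduce the effect in the animations above. This is my own toy model, not OneWhiteEye's graphics: treat the band as a height profile over horizontal position, and let the smear-o-gram sample it through one fixed pixel column, so any horizontal motion of the antenna changes which part of the band the column sees.

```python
def band_trace(t, x_col, profile, v_drop=1.0, v_side=0.0):
    """Apparent vertical position of the band in fixed pixel column x_col
    at frame t, for an antenna dropping at v_drop and drifting left at
    v_side (pixels per frame). profile(x) is the band's height relative
    to the antenna axis at horizontal offset x."""
    x_center = -v_side * t                        # antenna axis drifts left
    return v_drop * t + profile(x_col - x_center)

straight_band = lambda x: 0.2 * x        # slanted but straight band
curved_band   = lambda x: 0.05 * x * x   # curved band, like the dark band
```

With pure vertical motion the column offset x_col - x_center is constant, so even a curved band traces a straight line, as in the first animation. Add horizontal drift and a straight slanted band still traces a line, but with the wrong slope; a curved band's trace acquires curvature, the non-uniform distortion described above.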
einsteen
That's a great presentation. I'm wondering: you said that band is not straight; that must be due to perspective effects.

http://www.waarheid911.nl/antenna_northtower.jpg

I remember that I used one of the dots on the antenna, you posted this before

http://i12.tinypic.com/2gvl991.jpg

and your method to follow the antenna is much better than the smear-o-grams. If we go back to the F4 it is clear that a parallax effect might play a role, if you look at the 1st photo then you see that it is already the case with this camera combination.

ps.

Here are 2 pictures of the antenna remains

http://911research.wtc7.net/wtc/evidence/p...s/hanger17.html

it was a monster of an antenna, they had to place some accelerometers in it during the construction to measure the sweeping...
einsteen
Neu,

I liked your explanation of the crush-down crush-up, in fact you add the static force together with the crushing force. E1/h is roughly 1/3 of the static force in a homogeneous situation, but can we really do it in this way ?
NEU-FONZE
Einsteen:

Thanks!

Well the reactive force is initially like a spring force but there will be strain-rate effects to consider as the impact velocity increases down the tower. It all gets very complicated at that point!
einsteen
Yeah. I have to admit that, even though I finished a physics degree (no applied engineering etc.) years ago, some of these apparently simple things are not really simple. If you look, for example, at a normal crush-up where the E1s are equal and the masses also, then your example can explain why the story at the bottom breaks first. But if we look at the crush-down, I would say there is no difference between the value M_block*g for a moving mass or a static mass. The constant E1/h force is tricky; it should be a peak force whose integral equals E1. The problem with the example is that if you add the static force during the crush-down, some lower stories have a larger net force, but I could be terribly wrong... The problem is really a kind of coupled-springs problem, not real springs of course but springs with complicated F(y) functions.
NEU-FONZE
Einsteen:

Yes, a good "real-world" example is a long freight train with say 110 boxcars. If the engine at the front accelerates, or applies the brakes, a complex set of longitudinal compression waves travels up and down the length of the train. However, this is not exactly the same as the floors in the twin towers because the impacts of the collapsing floors were not constrained to line up like the buffers on the front and back of box cars. This gets us back to Gordon Ross and all that stuff we have been over many times! One more point: I think the core columns near the top of the towers were not all that strong - for example, some of the wide flange columns were only about 1/2 inch thick.
David B. Benson
Here are the results of running all the hypotheses using just the pixel column line 447 data:
CODE

BV-Z+linZS-F-pow-stretch  dB= 0.0 sd= 0.178

BV-Z+ZS+SS-F-pow-stretch  dB= 0.0 sd= 0.178
Sef-Z+ZS-F-pow-stretch    dB= 0.0 sd= 0.178
BV-Z+ZS-F-pow-stretch     dB= 0.0 sd= 0.178
BV-Z+ZS+SS-F-exp-stretch  dB= 0.1 sd= 0.179
BV-sq-F-sq-stretch        dB= 0.2 sd= 0.181
BV-s-F-funny-exp-stretch  dB= 0.2 sd= 0.186
BV-sq-F-lin-stretch       dB= 0.2 sd= 0.184
BV-lin-F-sq-stretch       dB= 0.2 sd= 0.193
Sef-Z+ZS+SS-F-pow-stretch dB= 0.3 sd= 0.188
BV-Z+ZS-F-exp-stretch     dB= 0.4 sd= 0.189
BV-Z+SS-F-exp-stretch     dB= 0.6 sd= 0.202
Sef-Z+ZS-F-exp-stretch    dB= 0.6 sd= 0.197
Sef-Z+SS-F-exp-stretch    dB= 0.7 sd= 0.205
Sef-Z+linZS-F-pow-stretch dB= 0.8 sd= 0.204
BV-s-F-s-stretch          dB= 0.9 sd= 0.202
BV-s-F-sq-stretch         dB= 1.2 sd= 0.210
BV-lin-F-lin-stretch      dB= 1.7 sd= 0.240
BV-const-F-sq-stretch     dB= 1.8 sd= 0.245
Sef-Z+ZS+SS-F-exp-stretch dB= 1.9 sd= 0.236
BV-const-F-lin-stretch    dB= 3.2 sd= 0.272
BV-lin-F-const-stretch    dB=14.9 sd= 0.503
const-acc-no-stretch      dB=23.8 sd= 0.627
mB-const-F-no-stretch     dB=30.7 sd= 0.701
BV-const-F-stretch0.14    dB=57.6 sd= 0.923
BV-const-F-stretch0.18    dB=57.8 sd= 0.924
BV-const-F-no-stretch     dB=58.0 sd= 0.931

Note the lower discriminatory power. For example, BV-lin-F-const-stretch is now only strongly disconfirmed as opposed to being on the border with very strong disconfirmation.

Nonetheless, the discriminatory power went down much less than I had expected. I'll have to think about that. However, the weighted standard deviations did not substantially change, offering a justification for using the merged data sets.
David B. Benson
Oops! Ignore the previous post.

I would delete it if I could.
OneWhiteEye
QUOTE (David B. Benson+Dec 3 2007, 09:18 PM)
Oops!  Ignore the previous post.

I would delete it if I could.

Don't sweat it. I have private freak-outs when I see some of the horrible errors (Edit: mistakes - do you see?) I've made. I could spell-check, but I don't. (Edit: spell check would only catch the most trivial of my errors, anyway)

If, otherwise, it's because of a different error, I can't imagine what it would be.
OneWhiteEye
QUOTE
Here are the results of running all the hypotheses using just the pixel column line 447 data

Thanks for indulging me. It helped me to get a better feel for the process and is very educational. It will be a bit before I can comment meaningfully about the results, if ever.

QUOTE
...the weighted standard deviations did not substantially change, offering a justification for using the merged data sets.

(not that it matters but) It seems a good justification and works for me.

In further consideration, it occurred to me how merging the columns might give better results independent of the requirements of the Bayesian method. I just made a case for the smear-o-grams being subject to second-order effects, currently neglected. It's possible merging tends to minimize the effect of such anomalous drift. The concerns about a smear-o-gram delivering the necessary accuracy remain in place.

I think it goes to the bigger question of what constitutes acceptable input for this method. I'm quite sure you're keenly aware of most of the issues involved in your process, the consequences of assumptions and provisional input, but it isn't so for me and I again have to ask your patience.
OneWhiteEye
QUOTE (einsteen+Dec 3 2007, 10:18 AM)
That's a great presentation. i'm wondering, you said that band is not straight, that must be due to perspective effects.

Yes. And more, the old antenna pictures don't quite match the later ones. Some things got changed around in that area over time, a different appendage added. It's not a simple ring around the antenna, though it looks like one. The dark band shape is even more complex than my graphic (as is the antenna motion!), although the graphics are a very good approximation to the real situation.

The geometric effect illustrated makes straight line motion into a curved dataset. The F4 perspective effect apparently did the opposite, and this is a separate issue that also needs examination.
David B. Benson
QUOTE (OneWhiteEye+Dec 3 2007, 02:26 PM)
Why?

Because that is not from running with just the pixel column 447 data!
OneWhiteEye
QUOTE (David B. Benson+Dec 3 2007, 09:54 PM)
Because that is not from running with just the pixel column 447 data!

Oh!

Well, I must inform you that it wasn't educational and failed to give me a better sense of what's going on! haha

I'm really glad I didn't say something stupid like "Indeed, this is as I expected..."

But it's probably a good time now to say it is. Now, hopefully, I turn out right. (Edit: There is a chance that merging the columns, as I said, would work to reduce some of the peculiarities of inherent drift but, if so, I'd expect the difference in results to be expressed by a change in proximity of adjacent ranks. Not enough necessarily to change rank order, but in that direction. Aggregate measure- discrimination and deviation - about like what you reported for the wrong run.)
David B. Benson
'Molten' steel at Ground Zero

is well done and most sensible.

NEU-FONZE will be pleased to see that there most assuredly was frozen, previously liquid, steel present.
David B. Benson

This assumes the camera was 1545 meters directly north of the north wall of the tower, along the ground, and was supported 1.25 meters above the ground. Theta is the angle to the roof line.
David B. Benson
For the pixel column lines 447--449 data, about how far up the antenna tower is the measurement area? I don't need this to great accuracy, but it seems that it is many meters above the roof line and this will make a difference in proper scaling of the pixel data.

Just for now I'll guesstimate 27 meters.
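The scaling issue can be made concrete with the stated geometry (camera 1545 m north, lens 1.25 m above ground). This is a pinhole-camera sketch of my own; nothing here is taken from Benson's actual calculation beyond those two numbers, and the example heights are assumptions. The angular size of one meter of wall shrinks with height, so pixel data from well up the antenna needs a slightly different meters-per-pixel factor than roofline data.

```python
import math

D  = 1545.0   # horizontal distance from camera to the north wall (m)
ZC = 1.25     # camera height above ground (m)

def elevation_deg(z):
    """Elevation angle of a feature at height z on the wall, in degrees."""
    return math.degrees(math.atan2(z - ZC, D))

def rad_per_meter(z):
    """d(angle)/dz at height z: radians of view subtended by one meter of
    vertical motion. Pixels scale with this, so its variation is the
    height-dependent meters-per-pixel correction."""
    return D / (D * D + (z - ZC) ** 2)
```

For instance, the ratio rad_per_meter(roof) / rad_per_meter(roof + 27) quantifies the relative scale error from applying the roofline scale to a point 27 meters up the antenna (the guesstimate above); the roof height itself has to be supplied.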
metamars
QUOTE (metamars+Oct 23 2007, 04:30 AM)
I've taken a look, in the last couple of days, at a paper called Experimental and Theoretical Studies of Columns Under Axial Impact by Ari-Gur, Weller and Singer (pub. in Int. J. Solids Structures Vol. 18 No. 7 pp619-641 1982)

This paper has relatively simple derivations of displacements due to buckling of columns under a dynamic load.

Figure 2 in this paper plots both compressive strain vs. time, as well as bending vs. time (stacking the graphs so that their time axes match up.)

http://metamars.i8.com/arigurstrainvstime.JPG

If you click on it, it will render more clearly.

I'm not sure that I understand the bending vs. time graph. Apparently, about 1/4 msec after the peak compressive impulse passes, the bending strain hits its first extremum. It then oscillates around 0 bending stress, I would guess due to a "whipsaw" effect that comes from elastic loading in the transverse direction.

If this interpretation is correct, then it implies (I think) that energy is being dissipated in plastic deformation not only downwards, but upwards, also, with the net deflection being downwards.

I would further guess that, in the regime tested, there is almost as much energy dissipated in the 'down-stroke' plastic deformation stages as in the 'up-stroke' plastic deformation stages.

All other things being equal, I would therefore guess that this will almost quadruple the energy sink associated with plastic deformation in the dynamic regime studied, vs. the quasi-static deformation assumed in the "usual" calculations as implied by Bazant and Zhou and carried out by Newton's Bit (on JREF).

Can anybody enlighten us as to the correct interpretation of the bending vs. time graph?
metamars
QUOTE (metamars+Nov 11 2007, 08:01 PM)
The problem of a rod fixed at one end and struck longitudinally on the other end is solved in A Treatise on the Mathematical Theory of Elasticity by A.E.H. Love (p. 431 - 435). Further details of the theory are given in references dated 1916 and 1919. This purely elastic theory assumes that all elastic wave energy is reflected back into the rod when the waves hit the fixed end.

This problem is also solved in Impact: The Theory and Physical Behaviour of Colliding Solids by Werner Goldsmith, starting on p. 46. I must say, his solution seems more complicated, though his goal is to find the time intervals for arresting motion in the direction of impact over various mass-ratio (striking mass : mass of rod) intervals.

Unfortunately, the mass ratios that he explicitly solves for are too low to have any relevance in the WTC case, and computing them by hand looks extremely tedious. I wish I knew mathematica....

metamars
I saw my mathematician cousin on Thanksgiving holiday. I brought my copy of "A Treatise on the Mathematical Theory of Elasticity" by Love. It turns out that he had had this same book as a textbook.

I asked him about solutions to a problem similar to the section "281 Rod fixed at one end and struck longitudinally at the other" on p. 431. The main difference I asked about was: what if the rod is non-uniform, and instead the cross-section varies linearly with length, by a factor of 16? He said that wouldn't make much difference, since either end is much smaller than the length.

He also told me that the longitudinal and transverse vibrations are loosely coupled, and typically the longitudinal vibrations are ignored. (Love's solution assumed a purely longitudinal wave propagation.)

I then told him that the problem I really wanted to solve (he had figured out I was interested in the WTC collapse, which made him resistant to going deeply into it, unfortunately) was not that of a 111-story rod which is free to buckle anywhere along its length, but rather of a 111-story non-uniform rod which is constrained in the x-y plane at every floor height. He told me that this would definitely have to be solved numerically, using finite-difference methods. (Of course, the problem actually solved would be for far less than the full 111 stories.)

Although we didn't discuss it, I assume that bracing on each floor will make longitudinal vibrations much more important, perhaps to the point where they can't be ignored.

wcelliott
QUOTE
Although we didn't discuss it, I assume that bracing on each floor will make longitudinal vibrations much more important, perhaps to the point where they can't be ignored.

Another point (I don't know if it's been addressed) is that the interior sides of the exterior perimeter columns would've been hotter (and therefore have different plastic characteristics) than the outsides of the exterior columns. Even perfectly-longitudinal forces would be more likely to induce buckling under these circumstances than an idealized "spherical-chicken" model.
OneWhiteEye
QUOTE (David B. Benson+Dec 3 2007, 11:26 PM)
For the pixel column lines 447--449 data, about how far up the antenna tower is the measurement area?  I don't need this to great accuracy, but it seems that it is many meters above the roof line and this will make a difference in proper scaling of the pixel data.

Just for now I'll guesstimate 27 meters.

I don't know, I haven't looked at that yet. That looks like a pretty good guess.

The placement of the antenna components could be determined from other photos, but it would be a lot easier if a diagram were available.
OneWhiteEye
Tracking for data starts on the antenna near the top of the frame and can't go much past mid-view because of smoke, but floor-height measurements for scale DO extend to the bottom of the frame, so it is useful to know the change of vertical scaling from top to bottom of the frame.

It is somewhat academic, since the floor heights can be measured directly to determine actual coordinate mapping with ALL sources of static distortion handled (and no simplifications for analysis).

Assume the following:

- the horizontal angle of view in the video is 7.5 degrees (probably close - approximated from ray tracing)
- camera azimuthal angle wrt normal to wall surface is zero in tower-centered coordinates (as in DDB post above; not true but close)
- camera at ground level (not true but very close)
- each pixel represents an equal portion of the view angle (not true but maybe good enough)
- ground distance from tower wall to camera = 1560m (close - taken from ray tracing depth coordinate)

The aspect ratio of the frame results in a vertical angle of view of 5 degrees. The roofline is 1/4 of the way down from the top of the image. With an angle of 14.966 degrees, camera to roofline, I'll assign the bottom and top of the image the angles 11.216 and 16.216, respectively.

I really don't want to show all my math (it was enough to DO it) but I get the following relation for change of vertical height delta_y as a function of base angle theta and change of angle delta_theta:

delta_y = z*(tan(theta + delta_theta) - tan(theta))

where z is the ground distance from tower to camera, 1560m.

This can be evaluated at the angles of the image top and bottom for a given delta_theta and compared to see the ratio of scale. A natural delta_theta is the angle corresponding to one pixel, so dividing the view angle by the image height, the calculated delta_theta is (5/480) ~ 0.0104 degrees.

For the bottom of the image, delta_y ~ 0.295m
For the top of the image, delta_y ~ 0.308m

The ratio of top to bottom is 1.044.
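The numbers above can be reproduced with a short script. This is just the relation delta_y = z*(tan(theta + delta_theta) - tan(theta)) evaluated at the stated image-top and image-bottom angles; the only assumption beyond the post is a 480-pixel image height, which is implied by the delta_theta = 5/480 step:

```python
import math

def delta_y(z, theta_deg, dtheta_deg):
    """Vertical height on the tower spanned by an angular step dtheta at base angle theta."""
    return z * (math.tan(math.radians(theta_deg + dtheta_deg))
                - math.tan(math.radians(theta_deg)))

z = 1560.0          # m, ground distance from tower wall to camera
dtheta = 5.0 / 480  # degrees per pixel (5-degree vertical view, 480-px image height)

bottom = delta_y(z, 11.216, dtheta)  # angle at image bottom
top = delta_y(z, 16.216, dtheta)     # angle at image top

print(round(bottom, 3), round(top, 3), round(top / bottom, 3))  # 0.295 0.308 1.044
```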

These results do not match measurements taken from the video, which show 0.255 m per pixel averaged over 18 floors, so the assumptions and/or my calculations are wrong; but this is the type of calculation that could be done to show the change of scale across the image. Even though my magnitudes are off by about 18%, the ratio could be in the ballpark.

A (>) four percent difference from top to bottom in scale factor. If true, significant?

(I'll redo this in a better fashion someday)
OneWhiteEye
Re-running the calculation with a vertical view angle of 4.25 gives:

bottom delta_y = 0.252
top delta_y = 0.261
ratio = 1.038

which is more in line with the measurements from the video.

This is quite realistic. Of all the assumptions above, the most flaky was the view angle, which I obtained from a rendered scene. I'd had problems with getting floor height to match (too many floors) in that scene and know I failed to account properly for aspect ratio in doing the rendering. When I go back to rendering, I'll correct the problems and plug in this slightly smaller view angle. Everything, at that point, might fit like a glove.
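The 4.25-degree re-run uses the same relation; here is a standalone check. One assumption on my part: the roofline stays 1/4 of the way down the frame, so the top and bottom angles become 14.966 + 0.25*4.25 and 14.966 - 0.75*4.25 degrees.

```python
import math

def delta_y(z, theta_deg, dtheta_deg):
    """Vertical height on the tower spanned by an angular step dtheta at base angle theta."""
    return z * (math.tan(math.radians(theta_deg + dtheta_deg))
                - math.tan(math.radians(theta_deg)))

z = 1560.0           # m, ground distance from tower wall to camera
view = 4.25          # degrees, revised vertical view angle
dtheta = view / 480  # degrees per pixel
roofline = 14.966    # degrees, camera-to-roofline angle

top_angle = roofline + 0.25 * view     # roofline 1/4 down from image top
bottom_angle = top_angle - view

bottom = delta_y(z, bottom_angle, dtheta)
top = delta_y(z, top_angle, dtheta)
print(round(bottom, 3), round(top, 3), round(top / bottom, 3))  # 0.252 0.261 1.037
```

The ratio comes out 1.0374 here versus the 1.038 reported above, presumably from slightly different angle choices; the per-pixel heights agree.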
adoucette
QUOTE (David B. Benson+Dec 3 2007, 05:14 PM)
'Molten' steel at Ground Zero

is well done and most sensible.

NEU-FONZE will be pleased to see that there most assuredly was frozen, previously liquid, steel present.

I think this is the Link that David attempted to post:

http://11-settembre.blogspot.com/2007/12/m...ave-simple.html

Arthur
einsteen
Nice photos. Are those supposed to be the pools or fountains?
Chainsaw,
QUOTE (einsteen+Dec 4 2007, 10:19 AM)
Nice photos, is that supposed to be the pools or fountains ?

Does not matter; it is Fe3O4 from oxygen cutting of the steel, not molten iron or molten steel. Easy to tell the difference.
David B. Benson
QUOTE (adoucette+Dec 3 2007, 11:12 PM)
I think this is the Link that David attempted to post:

http://11-settembre.blogspot.com/2007/12/m...ave-simple.html

Arthur

Yes. Thank you.