Physics Of 9/11 Events - Part 3

David B. Benson
Here is a variation in which the t0 time parameter is adjusted for best fit, again using the pixel column 447 data:

CODE
BV-K+ZS-F-exp-pow-stretch dB= 0.0 sd= 0.100

Sef-K+ZS-F-exp-pow-stretch dB= 0.7 sd= 0.131


There is no substantial change in the Seffen case, but in the BV case, starting a mere 1.7 milliseconds later (with slightly different values for the other parameters) reduces the sd from 0.116 to 0.100.

For both, K=0 was forced to hold.
David B. Benson
Another variation in which only constant force is used. (ZS parameter forced to be zero.)

CODE
BV-K+ZS-F-exp-pow-stretch dB= 0.0 sd= 0.252

Sef-K+ZS-F-exp-pow-stretch dB= 0.6 sd= 0.329


Notice how much poorer the fit is. I take this as further confirming evidence against a constant resistive force.

By the way, notice that a resistive force of

F(Z,S) = kZS

is the constant k times the momentum, ZS. Dunno why...
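To make that remark concrete: with uniform density mu, the moving mass is mu*Z, so kZS is proportional to the block's momentum (mu*Z)*S. Here is a minimal numerical sketch of a variable-mass crush-down with F(Z,S) = kZS; the parameter values, the uniform density, and the crude Euler stepping are all placeholders, not the fitted BV setup:

CODE
# Toy crush-down: d/dt(mu*Z*S) = mu*Z*g - k*Z*S, with dZ/dt = S.
# All parameter values below are illustrative placeholders.
def simulate(k=1.0, mu=1.0, g=9.81, z0=1.0, dt=1e-3, t_end=3.82):
    z, s = z0, 0.0                             # crushed depth Z and speed S
    for _ in range(int(t_end / dt)):
        dsdt = g - (k / mu) * s - (s * s) / z  # product rule on mu*Z*S
        s += dsdt * dt
        z += s * dt
    return z, s

print(simulate())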
OneWhiteEye
QUOTE (David B. Benson+Dec 9 2007, 12:03 AM)
Good.  Feel free to ask more when you are of a mind to do so.

Thanks for all the info.

QUOTE
Yes.  I am quite confident that a top-grade hypothesis should be able to distinguish the two.

Certainly seems reasonable.

QUOTE
Useful to the structural engineering community in designing tall buildings which meet FEMA's building safety criterion, i.e., that the building does not collapse due to a high-magnitude earthquake, for example.

I wasn't too clear in what I said. I meant any hypothesis which is unable to distinguish between WTC1 and Landmark is no good... with the one exception where the proper assignment of parameters (E1, h, whatever) for the respective buildings, BY PURE COINCIDENCE, calls for nearly identical displacement data.

QUOTE
Exactly.  Since that is certainly not what was observed, this combination of force function and stretch function is decisively disconfirmed.  However, this is what most modelers have been using up until now, although with non-zero parameters.

Interesting.

What are your thoughts on error due to rotation of the antenna? And what is the stance of BV equations wrt rotational energy in the 1-D model? I mean specifically as it relates to making measurements which have both rotational and translational components.
newton
QUOTE (wcelliott+Dec 9 2007, 08:32 PM)
newton lives in a much more interesting universe than the rest of us.

yes. i don't know everything, which means life is still full of wonder for me.

i'd hate to be as omniscient as some of you OCTs seem to be.
einsteen
QUOTE (OneWhiteEye+Dec 10 2007, 12:41 AM)
What are your thoughts on error due to rotation of the antenna? And what is the stance of BV equations wrt rotational energy in the 1-D model? I mean specifically as it relates to making measurements which have both rotational and translational components.

That is a very interesting remark. If I understand everything well, it is your data from the antenna measurements. The perspective effect has been corrected, but the rotation is still a problem. And even if it is possible to correct for rotation, then we get another problem, i.e.

1) Does the data corrected for rotation represent a model in which a non-tilted block falls perfectly straight down?

2) Does the data corrected for rotation represent a tilted block falling perfectly straight down?

From the smear-o's that I posted, not much difference could be seen, although we have two totally different situations. I made them quick and dirty, but even if you do it precisely there is a high chance that, likewise, not much difference will be seen. On the one hand we are dealing with a large mass that needs to be accelerated, and I made the remark that they therefore should look the same; but on the other hand we can wonder what the value of the data is if there is really no difference with the Landmark.
einsteen
QUOTE (David B. Benson+Dec 9 2007, 11:59 PM)

By the way, notice that a resistive force of

F(Z,S) = kZS

is the constant k times the momentum, ZS. Dunno why...

That's indeed strange. I guess this is based on the first 3.8 seconds; how does something like this work out if you use it for the whole collapse?

If we only look at a discrete model in vacuum and ignore stretch and all that stuff, then it is hard to explain. I would say that something like k0+k1*Z makes sense for a building. Could it be that ejection of mass plays a role?
shagster

Einsteen,

I plotted the crush-up collapse duration for various values of E1/m. I used ODE Toolkit and the continuous diffeq crush-up model that I posted about on p542 to obtain the values of crush-up duration vs. E1/m. This is for a tower height of 174 m (WTC7).

I'm not sure where the squiggles came from in your t vs E1/m graph. Perhaps it is an artifact from a numerical solver, if you were using one. I don't see it in my results.


Graph (y = collapse duration; x = E1/m):


Larger graph:

http://i134.photobucket.com/albums/q91/sha...rushupe1m-1.jpg


Here are the results from ODE Toolkit
(E1/m in J/kg and collapse duration in seconds)

0, 5.96
1, 6.06
2.5, 6.26
4, 6.48
5, 6.66
6, 6.86
7, 7.06
7.5, 7.17
8, 7.28
8.5, 7.39
9, 7.51
10, 7.74
11, 7.97
12.5, 8.3
14, 8.66
15, 8.88
16, 9.12
17.5, 9.45
19, 9.78
20, 10.0
21, 10.22
22.5, 10.54
24, 10.85
25, 11.1
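For anyone wanting to reproduce this kind of scan, a hedged sketch of the workflow with scipy follows. The ODE here is only a stand-in that treats E1/m (J/kg) as energy dissipated per unit mass per story of height h, giving a constant resistive deceleration (E1/m)/h; it is not the p542 crush-up model, though it does reproduce the free-fall value of 5.96 s at E1/m = 0:

CODE
# Scan collapse duration vs E1/m with an ODE solver (illustrative model).
import numpy as np
from scipy.integrate import solve_ivp

G, H, N = 9.81, 174.0, 47            # WTC7-like height; story count assumed
H_STORY = H / N

def duration(e1m):
    def rhs(t, y):                   # y = [drop distance, speed]
        return [y[1], G - e1m / H_STORY]
    hit_ground = lambda t, y: y[0] - H   # stop when the drop spans H
    hit_ground.terminal = True
    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], events=hit_ground)
    return sol.t_events[0][0] if sol.t_events[0].size else float("inf")

for e1m in (0.0, 5.0, 10.0, 25.0):
    print(e1m, round(duration(e1m), 2))  # e1m = 0 gives ~5.96 s (free fall)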


shagster
QUOTE (einsteen+Dec 3 2007, 08:00 AM)

I found something interesting, if I use the maple discrete calculation I get

E1/M,  t(s)

0, 5.96
5, 6.63
10, 7.73-0.23i
15, 8.90-1.04i
20, 10.00 - 2.33i
25, 11.07 - 4.21i

The real part then is the collapse time, which is almost the same as your values.


The crush-up solution has an imaginary component when E1/m is higher than the critical value where the collapse no longer goes to completion.

Here is a graph of the results of the ODE solver for a crush-up. This is the same as the graphs I posted on p542 but the time axis extends out to 30 seconds. Note that the solutions are oscillatory when E1/m is higher than the critical value (E1/m = 10,15,20,25). The imaginary component in your solution stems from that oscillatory behavior.


larger graph:

http://i134.photobucket.com/albums/q91/sha...up/crushup1.jpg

It's important not to conclude erroneously that the collapse duration for a crush-up can't be higher than the duration associated with the critical E1/m. The duration can be higher. The portions of the x vs. t curves past the first minimum that show the oscillatory behavior aren't physically meaningful for the collapse phenomenon being modeled (i.e., the resistive force doesn't make the top of the building rise up into the air after it reaches a minimum height). The portions of the curves up to the first minimum in the x vs. t curve are meaningful and give the collapse duration.
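In code, that rule is a one-liner; a small sketch (the array names are assumptions):

CODE
# Collapse duration = time of the first local minimum of the height curve,
# discarding the unphysical oscillatory tail of the crush-up solution.
import numpy as np

def duration_from_first_minimum(t, x):
    t, x = np.asarray(t), np.asarray(x)
    upturns = np.flatnonzero(np.diff(x) > 0)  # indices where x starts rising
    return t[upturns[0]] if upturns.size else t[-1]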
shagster
As an exercise, it might be useful to try to model the first few seconds of the WTC1 collapse using the antenna drop data and the continuous crush-up model instead of crush-down. Parameters such as resistive force and stretch as a function of height could be adjusted to see how small the standard deviation could be made.
OneWhiteEye
QUOTE (einsteen+Dec 10 2007, 09:43 AM)
1) Does the data corrected for rotation represent a model in which a non-tilted block falls perfectly straight down.

2) Does the data corrected for rotation represent a tilted block falling perfectly straight down.

Exactly. If only looking at the one video, it would be easy to assume the total rotation of the upper block remains quite small. But, even in that case, what impact does it have on interpretation of data? The angle is small but the body is massive, while pure translation during the same interval is also small (or zero); neglecting rotation because the angle is small cannot, and should not, be expected to give a correct result.

But, if you look at other videos, you see the antenna rotates to a substantial angle, not apparent in the one video because rotation is away from the camera. In real measurements, two points very close on the antenna appear to get about two percent closer in the first few seconds, an effect of the projection of rotation.

A rotation about an arbitrary origin is equivalent to a rotation and translation about different body-centered coordinates. If all or most of the translation of a measured feature comes from rotation about another point, would it even be reasonable to expect any sort of match from a 1-D model? Particularly one that excludes rotation entirely, as opposed to having a corrective term in the canonical coordinate?
David B. Benson
I'll eventually answer questions, but just now I have to state that the differential equation I have been using is only correct for constant stretch. For other stretch functions there are corrective terms to be added. I suspect (or maybe just hope) that the correction is small. We'll have to see.

As it stands, all results to date, except those for constant stretch, have to be thrown out.
OneWhiteEye
Unfortunate. I'm sorry to hear that, but it's much better it be caught now than later.
OneWhiteEye
C447 data (raw) is roughly the frame range 800-1032. This is the C447 smear-o-gram with the curve that was digitized for that data:

http://i15.tinypic.com/820vdz9.png

The net vertical displacement over this time is measured to be 177 pixels. First appreciable apparent vertical motion occurs around frame 895, as reflected in the numeric data.

Look at the feature which the smear-o-gram of pixel column 447 tracks, at least initially:

http://i5.tinypic.com/733jmm8.png

It is the left edge of the dark band, marked by a green star. Where is this feature in frame 1032?

http://i19.tinypic.com/71vukie.png

Well, the feature isn't really visible anymore (!!!) but the once-bright thing to the right still is, so it can be determined from direct inspection of these initial and final frames that the dark band as a whole has moved about 6 pixels to the left while dropping 179. A two-pixel discrepancy exists between the vertical displacement in the C447 data and this more reliable observation; not too bad, but still a 1% correction upwards (faster).

The discrepancy can be explained and, in the process, more insight gathered as to the nature of errors and differences between C447 and the next two columns. I explained the mechanics of the distortion in a previous post, but the effect will now be examined for the real data. In a blow-up of the dark band taken from frame 800, the measurement point for C447 data (green star) can be followed through its migration 6 pixels over to the right, likewise C448 and C449 (blue and red):

http://i6.tinypic.com/8228lds.png

The dark point being measured in C447 has shifted up two pixels, so the discrepancy is fully explained, as was expected all along. Geometric considerations evident from the very beginning have finally been quantified in the most rudimentary fashion: up to 2 pixels of non-random error exists in all three datasets.

It's obvious the measurement points of the three columns migrate at the same horizontal rate but suffer artificial vertical displacement at different times, due to geometry. Column 449, in particular, is seen to get its 'boost' well before C447, whenever that is. The horizontal rate itself cannot be taken to be constant or even linearly increasing; the initial motion is not known, so it's impossible to say where these shifts occur in the data without further measurement.
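For reference, building such a smear-o-gram is mechanically simple; a minimal sketch, in which the frame filenames and the column index are assumptions:

CODE
# Stack one pixel column from every frame side by side, so vertical motion
# in that column appears as a sloping trace.
import numpy as np
from PIL import Image

def smear_o_gram(frame_numbers, column=447):
    cols = []
    for n in frame_numbers:
        frame = np.asarray(Image.open(f"frame_{n:04d}.png").convert("L"))
        cols.append(frame[:, column])   # one-pixel-wide vertical slice
    return np.stack(cols, axis=1)       # rows = image y, columns = time

# e.g. smear = smear_o_gram(range(800, 1033))  # C447, frames 800-1032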
OneWhiteEye
Such measurements are not that difficult and I will do them soon. I believe that these corrections should not only be applied to the data, but done first as depicted in this diagram:

http://i12.tinypic.com/81hmr29.png

The reasons are simple. This is a systematic error that is known to exist and can be accurately quantified and corrected. It introduces unnecessary discrepancies between the three datasets. Once corrected, the three datasets will probably agree so well the +/- 1 pixel accuracy claimed will seem quite modest. I would then totally support merging C447-449 into one set; it would be the right thing to do.

Some predictions (I wrote this before DBB's revelation above but it's worth saying still):

- this correction would make a substantial difference to the results of a Bayesian analysis, not unlike perspective correction despite smaller magnitude
- much better agreement between individual runs of the column data which would, in turn, make an even stronger justification for merging the data
- the results of analyzing the merged data will agree more strongly with the results from individual columns
- I'd be surprised if there was much if any change in rank between runs of individual or merged columns
- the measure of discrimination as a dependency on quantity of points (posted by DBB) will be satisfied as the sets are merged one by one

That's a pretty specific set of predictions for someone shooting from the hip, isn't it? Audacious, considering I've never done such an analysis and barely understand it. Wouldn't it be incredible if I were right? Could I claim the $1M challenge prize from the Randi people?

I could be completely wrong. It wouldn't matter; the justification is that it's the right thing to do.

The work is not done, so naturally the corrections are pending. Preliminary data such as this is provisional. Nothing can be refuted with confidence by the current data.

David B. Benson, I've got great respect for what you're trying to do with your analyses; that's why I'm working to provide you with data. I don't see anyone else (except maybe einsteen) who is or will be doing anything with it. You understand your process better than I could hope to, but I have familiarity with this data and I'm no stranger to the acquisition, reduction, and analysis of data in the general sense. I just want to be sure the best possible results are obtained from data I provide, and no premature conclusions are reached.

Added thought: In light of the need to re-run everything, it seems an opportune time to bring this up, even if the issues in previous runs could be attributable to modeling error.

frater plecticus: If you happen to be driving by again, THIS is what I mean by GOOD data.
David B. Benson
Whew! Here is the first run with the correct equation, using again the pixel column 447 data:

CODE
BV-K+ZS-F-exp-pow-stretch dB= 0.0 sd= 0.120

Sef-K+ZS-F-exp-pow-stretch dB= 1.0 sd= 0.167


Notice that the standard deviations are somewhat worse, but not by a lot. Above has K=0 forced; below has the parameter for the ZS term forced to be zero.

CODE
BV-K+ZS-F-exp-pow-stretch dB= 0.0 sd= 0.262

Sef-K+ZS-F-exp-pow-stretch dB= 1.4 sd= 0.391


So, once again, a constant resistive force is disconfirmed in comparison to a force function of

F(Z,S) = kZS.
chris lz
Hi. New here. Don't know if this has been answered, or if this is the place to ask.

From the PM 9/11 book, pp46-7

QUOTE
"The explosives configuration manufacturing technology [to bring down those buildings]  does not exist," [Mark] Loizeaux says. "If someone were to attempt to make such charges, they would weigh thousands of pounds apiece.  You would need forklifts to bring them into the building."

The biggest commercially available charges, Loizeaux tells Popular Mechanics, are able to cut through steel that is three inches thick. The box columns at the base of the World Trade Center towers were 14 inches thick on a side. If big enough charges did exist, Loizeaux says, for each tower it could hypothetically take as long as two months for a team of up to 75 men with unfettered access to three floors to strip the fireproofing off the columns and then place and wire the charges.


I'm having a discussion on this point in another forum. The poster writes

QUOTE
If anyone thinks about it, the "14 inches on a side" explanation for why "tons of explosives" would have been needed to cut the supports in the Towers, one would easily see that the supports, being hollow (for reasons of strength, reduced weight, and cost), would invalidate the explanation. And I don't care how poor an "expert" one is, anyone even remotely acquainted with the way steel high rises are constructed would never have come up with THAT zinger.



Comments?
David B. Benson
QUOTE (einsteen+Dec 10 2007, 06:46 AM)
That's indeed strange. I guess this is based on the first 3.8 seconds, how does something like this work out if you use it for the whole collapse ?

If we only look at a discrete model in vacuum, ignore stretch and all stuff then it is hard to explain.

I would say that something like k0+k1*Z makes sense for a building.

Could it be that ejection of mass plays a role?

Yes, just the first 3.82 seconds. I haven't attempted extrapolating to the entire collapse, so I'll just guess that it makes the collapse take somewhat longer.

However, the stretch is an important component of progressive collapse.

Naively, yes. Indeed, up until now that is what modelers have been using, modified in BLGB to include an additional term for air movement and another for concrete comminution. What makes this so puzzling, in part, is that at 3.8 seconds the collapse is proceeding at about 25 m/s. That should be plenty to comminute concrete.

I assume uniform mass distribution. About 17 floors are crushed in the 3.82 seconds. At that elevation each story massed about 6% more than at the collapse starting elevation. But I also ignore mass ejection. If that is about the same as the increase in mass, it evens out. So I currently think that mass ejection does not explain the resistive force function.
OneWhiteEye
QUOTE (chris lz+Dec 10 2007, 08:09 PM)
Hi. New here. Don't know if this has been answered, or if this is the place to ask.
From the PM 9/11 book, pp46-7
I'm having a discussion on this point in another forum. The poster writes
...
Comments?

Greetings, chris lz, welcome to the forum. Opinions, but no comments except that Loizeaux would certainly be aware of the hollow nature of the supports in the comment he made. Whether or not his observation is accurate - no comment.
David B. Benson
QUOTE (chris lz+Dec 10 2007, 01:09 PM)
... if this is the place to ask.

Certainly is!

While the box core columns were indeed hollow, the ones at the bottom levels contained an additional bar through-welded in the center, running the long dimension of the column. So the short dimension, about 14 inches, was about 9--10 inches of mild steel and the remainder air.
einsteen
QUOTE (shagster+Dec 10 2007, 02:46 PM)
Einsteen,

I plotted the crush-up collapse duration for various values of E1/m. I used ODE Toolkit and the continuous diffeq crush-up model that I posted about on p542 to obtain the values of crush-up duration vs. E1/m. This is for a tower height of 174 m (WTC7).

I'm not sure where the squiggles came from in your t vs E1/m graph. Perhaps it is artifact from a numerical solver if you were using one. I don't see it in my results.


Graph (y = collapse duration; x = E1/m):


Larger graph:

http://i134.photobucket.com/albums/q91/sha...rushupe1m-1.jpg


Here are the results from ODE Toolkit
(E1/m in J/kg and collapse duration in seconds)

0, 5.96
1, 6.06
2.5, 6.26
4, 6.48
5, 6.66
6, 6.86
7, 7.06
7.5, 7.17
8, 7.28
8.5, 7.39
9, 7.51
10, 7.74
11, 7.97
12.5, 8.3
14, 8.66
15, 8.88
16, 9.12
17.5, 9.45
19, 9.78
20, 10.0
21, 10.22
22.5, 10.54
24, 10.85
25, 11.1

As far as I know it is only a summation, and on a larger scale it looks smooth. It must have an explanation; I'll check tomorrow.
David B. Benson
QUOTE (einsteen+Dec 10 2007, 02:43 AM)
If I understand everything well, it is your[OneWhiteEye's] data from the antenna measurements.

The perspective effect has been corrected, but the rotation is still a problem.

1) Does the data corrected for rotation represent a model in which a non-tilted block falls perfectly straight down.

2) Does the data corrected for rotation represent a tilted block falling perfectly straight down.

... on the other hand we can wonder what the value of the data is if there is really no difference with the Landmark.

Yes.

Yes, although the effect will be minor for WTC 1 with a tilt of only about 11 degrees of arc.

Yes, close enough.

Yes, close enough for WTC 1.

But there is a difference in the data between WTC 1 and Landmark. In either case, I am after a model which is fast to run and can be used to aid in calibrating the ASI (and others) global progressive collapse software.
David B. Benson
QUOTE (OneWhiteEye+Dec 9 2007, 05:41 PM)
What are your thoughts on error due to rotation of the antenna?

And what is the stance of BV equations wrt rotational energy in the 1-D model?

I mean specifically as it relates to making measurements which have both rotational and translational components.

This is a perspective error which needs to be compensated for. The equations are easy to derive, but we need some estimate of the magnitude of the tilt at each time. I have some data measured by NEU-FONZE and I plan to use this as a first approximation to make this (small) correction.

The crush-down equations are one-dimensional only.

Just make the best possible measurements. If the measurements are taken both on the antenna tower and also towards the left (east) end of the roofline, it should be possible to use that to calculate the tilt away from the camera. (The small movement to the left can be ignored.)
OneWhiteEye
Thanks for your response. I'll refrain from making too many comments until you've had a chance to digest some of the later remarks.

QUOTE (David B. Benson+Dec 10 2007, 09:59 PM)
This is a perspective error which needs to be compensated for.  The equations are easy to derive, but we need some estimate of the magnitude of the tilt at each time.  I have some data measured by NEU-FONZE and I plan to use this as a first approximation to make this (small) correction.

Sounds good.

QUOTE
The crush-down equations are one-dimensional only.


Yes, I understand, but even a 1-D equation of motion can account for effects introduced by other degrees of freedom. An example would be a fictitious force such as centrifugal force when doing calculation along a single axis of a 2+ dimensional rotating coordinate frame. Or, if your bend is more Lagrangian or Hamiltonian, an additional energy term expressed as a function of the single canonical coordinate.

I haven't looked at the papers in a while, but I don't remember seeing such a term, so I thought I'd ask.

I honestly don't know whether or not it is a significant correction term, but it seems like a significant action in the early displacement data I took. Empirically, it would seem that it could only be ignored if it were shown to be truly decoupled.

QUOTE
Just make the best possible measurements.  If the measurements are taken both on the antenna tower and also towards the left (east) end of the roofline, it should be possible to use that to calculate the tilt away from the camera.

Yes. As I've pointed out, it's quite likely that it can be calculated from the other manual data I've posted, based on the difference over time of positions of points on the antenna together with the camera angle and the known initial antenna tilt angle of zero. The difference gradually reaches an easily observable 2%. Working with the differentials strictly determines the tilt angle over time, to the measurement accuracy.

QUOTE
(The small movement to the left can be ignored.)

In general, yes, from the view of deviation from vertical. The Y component of an (x,y) measurement will not care about the X. But, as I've demonstrated above, the small movement makes a significant difference in the C447-449 data, a one dimensional measurement that does NOT directly account for 2D motion.
David B. Benson
Pixel column 449 data with corrected equation:

CODE
BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.118

Sef-ZS-F-exp-pow-stretch  dB= 0.4 sd= 0.148
BV-K+Z-F-exp-pow-stretch  dB= 1.8 sd= 0.201
Sef-K+Z-F-exp-pow-stretch dB= 6.5 sd= 0.328


The last two have force functions of the form einsteen suggested,

F(Z) = k0 + k1*Z

and in both cases neither k0 nor k1 is zero. But only the last is substantially disconfirmed.
David B. Benson
QUOTE (OneWhiteEye+Dec 10 2007, 03:43 PM)
I haven't looked at the papers in a while, but I don't remember seeing such a term, so I thought I'd ask.

... the known initial antenna tilt angle of zero.

The difference gradually reaches an easily observable 2%.

Working with the differentials strictly determines the tilt angle over time, to the measurement accuracy.

There isn't one in the equations as they are written. I don't think it an important correction term for such a massive structure.

The tilt angle at t0 is about 1.1 degrees of arc, according to NEU-FONZE.

Sorry, what difference reaches 2%?

Can you amplify on this remark? I'm not following.

========================================
More later...
OneWhiteEye
QUOTE (David B. Benson+Dec 10 2007, 11:19 PM)
There isn't one in the equations as they are written.  I don't think it an important correction term for such a massive structure.

Energy scales with mass, so the importance of such a term is independent of the mass. Big mass, big moment of inertia, big rotational kinetic energy. The tipping begins first and continues for some time... it rapidly comes to represent less and less of the total energy as the descent progresses, but I'm not so confident it can be totally ignored in the first few seconds.

This, obviously, is a statement towards the accuracy of the theory as applied to this specific instance, not an observation about the data analysis.

QUOTE
The tilt angle at t0 is about 1.1 degrees of arc, according to NEU-FONZE.

Thanks for that correction. I do need to relate that to my t0, though, because I see the slow creep to the side. At my t0, I don't think the tilt is as much as 1.1 degrees, but I can't be sure without further observation. What matters for the solutions, though, is your t0, so this isn't a big deal.

QUOTE
Sorry, what difference reaches 2%?

The difference in position values between the dark band and the average antenna dish position, in the manual data I posted some time back. I could see the difference in the curves easily by visual inspection; when overlaying the graphs it was very obvious. I've spoken of this several times.

These points are physically close together on the antenna. Given the accuracy of the measurement (quite good), there is no way the disagreement between the two displacements can be explained as measurement error. A physical explanation is necessary, and crushing of the few feet of antenna between them is not it.

The real explanation is that the antenna tips away during this time, making the projected distance between the two features on the image plane shorter. The effect is the same as if the camera theta angle were to gradually increase. Think dot product of the antenna long axis vector and any vector on the plane normal to the optical axis.

QUOTE
Can you amplify on this remark? I'm not following.

The two measured points on the antenna start at a fixed apparent distance. As collapse progresses, both of these points displace downward, but at different rates. The apparent decrease in distance can be used to calculate the angle required to produce the difference. The differences from each frame can be used to map the antenna tilt over time.
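A minimal sketch of that calculation, assuming the projected separation scales simply as the cosine of the tilt out of the image plane (a simplification of the full camera geometry):

CODE
# Two features a fixed true distance apart project to a shorter apparent
# separation as the antenna tips away from the image plane.
import numpy as np

def tilt_angle(s_t, s_0, alpha0=0.0):
    """Apparent separations now (s_t) and initially (s_0) -> tilt, degrees."""
    ratio = np.clip(s_t / s_0 * np.cos(np.radians(alpha0)), -1.0, 1.0)
    return np.degrees(np.arccos(ratio))

# A 2% apparent shortening from an initially vertical antenna:
print(tilt_angle(0.98, 1.00))   # about 11.5 degrees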
David B. Benson
Here are the same four hypotheses using the C447 data:

CODE
BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.133

Sef-ZS-F-exp-pow-stretch  dB= 0.3 sd= 0.153
BV-K+Z-F-exp-pow-stretch  dB= 2.6 sd= 0.246
Sef-K+Z-F-exp-pow-stretch dB= 6.0 sd= 0.343


Only the last hypothesis can be excluded as (twice over) substantially disconfirmed.
adoucette
QUOTE (chris lz+Dec 10 2007, 03:09 PM)
Hi. New here. Don't know if this has been answered, or if this is the place to ask.

From the PM 9/11 book, pp46-7

QUOTE
"The explosives configuration manufacturing technology [to bring down those buildings]  does not exist," [Mark] Loizeaux says. "If someone were to attempt to make such charges, they would weigh thousands of pounds apiece.  You would need forklifts to bring them into the building."

The biggest commercially available charges, Loizeaux tells Popular Mechanics, are able to cut through steel that is three inches thick. The box columns at the base of the World Trade Center towers were 14 inches thick on a side. If big enough charges did exist, Loizeaux says, for each tower it could hypothetically take as long as two months for a team of up to 75 men with unfettered access to three floors to strip the fireproofing off the columns and then place and wire the charges.



Comments?

It's kind of a strawman argument in that both towers failed at the FIRE/IMPACT floors.

He DOES have a point about the difficulty of unfettered access.

Also, since the collapse began in both towers with an outside wall on a fire floor bowing in, there is no explosive that is capable of a slow sustained force.

Arthur
David B. Benson
QUOTE (OneWhiteEye+Dec 10 2007, 04:54 PM)
... big rotational kinetic energy.

What matters for the solutions, though, is your t0, so this isn't a big deal.

I've spoken of this several times.

The effect is the same as if the camera theta angle were to gradually increase.

The differences from each frame can be used to map the antenna tilt over time.

Yes, it is an energy sink for the first second. I'll think more on this.

The antenna tower would begin to tilt to the south no later than the observed bowing-in of the south wall, 20 minutes before collapse. So for your begin time, there is already some small tilt to the south.

Yes, but it took until now to sink in. This is not a high bandwidth communication medium.

Err, decrease? No, you have it right.

That would be impressive!
OneWhiteEye
QUOTE (David B. Benson+Dec 11 2007, 12:26 AM)
The antenna tower would begin to tilt to the south no later than the observed bowing-in of the south wall, 20 minutes before collapse.  So for your begin time, there is already some small tilt to the south.

I really wish the camera had not been disturbed just prior to collapse. I must remember how fortunate we are that the shake stopped when it did.

QUOTE
Yes, but it took until now to sink in. This is not a high bandwidth communication medium.

True, it's not. Sometimes my explanations are not so good, either. I'm glad we connected on that. The measurements are already very accurate, and will get more so; so good that every measurable point will need individual perspective correction for the dynamics of the rigid body. Sort of a catch-22, but not really. You see, we can derive motion in a 3rd dimension from just one 2D video... My feeling is that other videos will add little or no information content to the descent tracking, but rather a fair bit to the dynamic perspective corrections required for features on the upper block in motion.

QUOTE
That would be impressive!

It's on the list. And it will work.
David B. Benson
The following C447 run tends to confirm the resistive force is very close to

F(Z,S) = kZS

for constant parameter k:

CODE
BV-linZS-F-exp-pow-stretch dB= 0.0 sd= 0.120

BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.120
Sef-linZS-F-exp-pow-stretch dB= 0.6 sd= 0.151


The linZS hypotheses add a small angle so that, in effect, the parameter k grows slightly with (Z-Z0). But this is only about half a degree of arc and doesn't fare much better.
chris lz
QUOTE (adoucette+Dec 10 2007, 07:08 PM)


It's kind of a strawman argument in that both towers failed at the FIRE/IMPACT floors.

He DOES have a point about the difficulty of unfettered access.

Also, since the collapse began in both towers with an outside wall on a fire floor bowing in, there is no explosive that is capable of a slow sustained force.

Arthur

Thanks everyone for your input.

Arthur, does it seem to you that Loizeaux's comments are so off base as to be obvious disinformation? This is the contention of the poster I quoted above. I'm trying to get a feel for the accuracy of Loizeaux's assertion. Is the fact it's actually 9-10 inches at that point the deciding factor?


(PS, it's "limo" from ad.com. Hope you'll make a re-appearance there someday. Regards, Chris.)
adoucette
QUOTE (chris lz+Dec 10 2007, 11:09 PM)
Thanks everyone for your input.

Arthur, does it seem to you that Loizeaux's comments are so off base as to be obvious disinformation? This is the contention of the poster I quoted above. I'm trying to get a feel for the accuracy of Loizeaux's assertion. Is the fact it's actually 9-10 inches at that point the deciding factor?


(PS, it's "limo" from ad.com. Hope you'll make a re-appearance there someday. Regards, Chris.)

Hi Chris,

As to your question about Loizeaux: no, I don't think it's disinfo. I just think he was responding to many of the CT'ers who claim that there were explosions at the base of the tower, and his comments make it pretty clear that it would have been a FORMIDABLE task to take out sufficient columns at the ground level to affect the towers.

It's interesting to note that in the 1993 bombing not a single column was severed, and since the bomb was not under the tower itself, those columns were FAR smaller than these massive core columns.

Arthur

PS, I was thinking about AD just the other day. I'm sure that one of these days I'll come back.



einsteen
Shagster,

Here is your continuous result together with the discrete model:

http://i4.tinypic.com/6lv2wyu.gif

They don't really differ much. I don't think the artifacts are numerical errors; if we zoom in at the point of maximum E1/M for a total collapse we get

http://i16.tinypic.com/8e6la2u.gif

But it is still hard to explain precisely. The first edge is the point where the collapse is barely complete. When E1/M is higher the collapse will not be complete and the same kind of function is expected, but then a story before the 47th; yet if I create a larger graphic there are even curves going back in time a little bit. I guess it has to do with the discrete model; if it were a pure numerical error it should also appear at the beginning, because the collapse time sums over all stories. But since the collapse was complete, we don't have to worry about that...
David B. Benson
As above, using C449 data:

CODE
BV-linZS-F-exp-pow-stretch dB= 0.0 sd= 0.110

BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.112
Sef-linZS-F-exp-pow-stretch dB= 0.5 sd= 0.145


The up-angles vary between one and three degrees of arc, in both this run and also the previous one (where I slipped a decimal point).

These two runs suggest that a resistive force function of the form

F(Z,S) = k*Z*S

is only approximately correct.
David B. Benson
However, this run using C449 suggests that the assumption of a constant k is ever so slightly better:

CODE
BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.103

BV-linZS-F-exp-pow-stretch dB= 0.1 sd= 0.109
BV-K+Z-F-exp-pow-stretch  dB= 2.3 sd= 0.200
David B. Benson
Using C449, have hypotheses which attempt to capture, directly, concrete comminution:

CODE
BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.103

BV-K+Z+ZSS-F-exp-pow-stretch dB= 2.1 sd= 0.192
BV-K+Z+SS-F-exp-pow-stretch dB= 2.8 sd= 0.230
Sef-K+Z+SS-F-exp-pow-stretch dB= 4.6 sd= 0.281
Sef-K+Z+ZSS-F-exp-pow-stretch dB= 5.1 sd= 0.287


K+Z+SS refers to a force function of the form

k0 + k1*Z + k2*S*S

while K+Z+ZSS refers to a force function of the form

k0 + k1*Z + k2*(Z-Z0)*S*S,

this assuming continual re-crushing of already crushed materials occurred.

Again we see that the B&V form is ok, but the Seffen form is substantially disconfirmed.
shagster
QUOTE (einsteen+Dec 11 2007, 05:21 PM)

But it is still hard to explain precisely. The first edge is the point where the collapse is barely complete. When E1/M is higher the collapse will not be complete and the same kind of function is expected, but then a story before the 47th; yet if I create a larger graphic there are even curves going back in time a little bit. I guess it has to do with the discrete model; if it were a pure numerical error it should also appear at the beginning, because the collapse time sums over all stories. But since the collapse was complete, we don't have to worry about that...


I tried using my discrete algebraic crush-up model for determining collapse duration vs. E1/m and see artifacts similar to what you describe.

I'll try to post some details later if I have time.
shagster
Einsteen,

I don't know all the details of your discrete model. But if you were using 47 stories in the model, you can try using more stories, such as 470 but with the same building height, and see what happens to that scalloped effect. The higher the number of stories, the more the model should approach a continuous type of model.

einsteen
Indeed Shagster, for each point the difference could be made arbitrarily small if we choose sufficient points. I'm glad you also had that error in your discrete setup.

What I did was calculate the time to go from story i to i+1 with the formula that follows from conservation of energy. The E1 was taken uniform; it could also be due to a peak force, which I think gives a larger collapse time.

for constant E1 I used

t_i = 2h/(v_i + v_{i+1})

which is the same as (v_{i+1} - v_i)/averageAcceleration.

It could also be done with g, but then just before and after a floor reaches ground zero the velocity changes stepwise. The result should be about the same.
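A sketch of that recipe in code follows. The per-story balance (a uniform deduction of E1/M per story, per unit mass) is an assumption about how the dissipation is apportioned, so with these numbers the complex branch only opens above about g*h = 36 J/kg rather than at the values in the Maple table, but the mechanism producing the imaginary parts is the same:

CODE
# Story-by-story energy balance; complex arithmetic lets the imaginary
# collapse times emerge naturally once 2gh - 2*e1m + v^2 goes negative.
import cmath

def collapse_time(e1m, height=174.0, n_stories=47, g=9.81):
    h = height / n_stories                 # uniform story height
    v, total_t = 0.0 + 0j, 0.0 + 0j
    for _ in range(n_stories):
        # (1/2)v_{i+1}^2 = (1/2)v_i^2 + g*h - e1m   (per unit mass)
        v_next = cmath.sqrt(v * v + 2.0 * g * h - 2.0 * e1m)
        total_t += 2.0 * h / (v + v_next)  # t_i = 2h/(v_i + v_{i+1})
        v = v_next
    return total_t

for e1m in (0.0, 25.0, 40.0):
    t = collapse_time(e1m)
    print(e1m, round(t.real, 2), round(t.imag, 2))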


OneWhiteEye
QUOTE (David B. Benson+Dec 11 2007, 09:45 PM)
However, this run using C449 suggests that the assumption of a constant k is ever so slightly better:

Would you mind summarizing your current results with all the adjustments, tweaks, and corrections?
David B. Benson
Same five hypotheses using C447 data:

CODE
BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.118

Sef-K+Z+ZSS-F-exp-pow-stretch dB= 1.3 sd= 0.179
BV-K+Z+SS-F-exp-pow-stretch dB= 2.7 sd= 0.223
BV-K+Z+ZSS-F-exp-pow-stretch dB= 3.3 sd= 0.236
Sef-K+Z+SS-F-exp-pow-stretch dB= 6.1 sd= 0.310


Here only the last is substantially disconfirmed.

Summary coming up shortly!
David B. Benson
Summary (2007 Dec 12)

The original, simple hypothesis of a constant force with
constant stretch is decisively disconfirmed in favor of
X-ZS-F-exp-pow stretch in which the force function is
of the form

kZS

with k constant (or very slightly increasing with Z), and
X is either BV or Sef. The same disconfirmation holds against
other resistive force function forms paired with a constant stretch.

Alternatives which do almost as well and so are not (yet)
disconfirmed by the naive Bayes factor method include
BV-K+Z-F-exp-pow-stretch
BV-K+Z+SS-F-exp-pow-stretch
BV-K+Z+ZSS-F-exp-pow-stretch
Sef-K+Z+ZSS-F-exp-pow-stretch.

However, when either the BV form or the Sef form is
disconfirmed, this throws a shadow on the other member of the
pair.

Tentative Conclusion
The resistive force function of the form

kZS

summarizes a wide variety of dissipative paths by which energy
was consumed. Rather than attempting to discover the form and
amount of these paths, along the lines of

k0 + k1*Z + k2*S*S + ...

we instead have a simple summary form, with only one parameter
to estimate. However, there is no particular reason to suspect
that this summary form describes what happened beyond the first
3.82 seconds of the progressive collapse.
einsteen
It would still be interesting to see what the range of possible results would be. If the function provided by OneWhiteEye is called f(t), then an error margin leads to two functions f_min(t) and f_max(t), for which

f_min(t) <= f(t) <= f_max(t) for t in [0s, 3.82s]

the correction for tilt leads to g_min(t) and g_max(t),
the correction for perspective/etc. leads to h_min(t) and h_max(t),
and so on. Shouldn't an error margin be part of the result?

If L_min(t) and L_max(t) are the Landmark functions, then it is possible that the intersection of those ranges leads to a couple of possible functions, say p1(t), p2(t), etc. If there is no physical reason to pick p_DBB(t), then why would that be the right one? And I don't get the single column data. Maybe I will understand it if your final version is available...
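A tiny sketch of that interval bookkeeping, with placeholder numbers:

CODE
# Carry [min, max] envelopes through each correction instead of a single
# curve.  All values here are placeholders, not real measurements.
import numpy as np

def apply_correction(f_min, f_max, corr_min, corr_max):
    """Add a correction known only to within [corr_min, corr_max]."""
    return f_min + corr_min, f_max + corr_max

f_min = np.array([0.0, 1.8, 4.1])      # measured envelope (placeholder)
f_max = np.array([0.5, 2.3, 4.9])
g_min, g_max = apply_correction(f_min, f_max, -0.5, 0.5)  # tilt, +/- 0.5 px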
David B. Benson
QUOTE (einsteen+Dec 12 2007, 01:53 PM)
Shouldn't an error margin be part of the result?

And I don't get the single column data.

The error is expressed as the weighted standard deviation between the measured data and the computed data.

'Single column data'? I don't follow what you are referring to...
David B. Benson
Using C447 data, the 'vertical avalanche' equation for the force is tested:

CODE
BV-ZS-F-exp-pow-stretch   dB= 0.0 sd= 0.133

BV-ZSS-F-exp-pow-stretch  dB=29.4 sd= 0.688
Sef-ZSS-F-exp-pow-stretch dB=44.0 sd= 0.856


But then I noticed a problem (the parameter estimation technique may find a poor local minimum), so I changed the starting values, and

CODE
BV-ZSS-F-exp-pow-stretch  dB= 0.0 sd= 0.109

BV-ZS-F-exp-pow-stretch   dB= 0.2 sd= 0.120
Sef-ZSS-F-exp-pow-stretch dB=65.1 sd= 0.856


so the resisting force function form

kZSS

is the winner, with the B&V equation. (Possibly the problem with the Seffen equation is not starting well, dunno yet.)
David B. Benson
Vertical avalanches with C449 data:

CODE
BV-ZSS-F-exp-pow-stretch  dB= 0.0 sd= 0.069

Sef-ZSS-F-exp-pow-stretch dB= 0.4 sd= 0.084
BV-ZS-F-exp-pow-stretch   dB= 0.7 sd= 0.098


Wowser!
OneWhiteEye
QUOTE (David B. Benson+Dec 12 2007, 07:12 PM)
Summary (2007 Dec 12)
...

Thanks. It's good to get an overview of what's going on.
OneWhiteEye
QUOTE (David B. Benson+Dec 12 2007, 09:22 PM)
(to einsteen)'Single column data'?  I don't follow what you are referring to...

I think einsteen is wondering why there are so many runs now using only a single column of data. Originally, when I asked you:

QUOTE
Could you run each of C447-449 independently and produce three tables like that?


your response was:

QUOTE
I could, but am unlikely to do so.


What made you change your mind?
OneWhiteEye
QUOTE (David B. Benson+Dec 12 2007, 09:22 PM)
The error is expressed as the weighted standard deviation between the measured data and the computed data.

einsteen refers to propagation of error throughout the analysis. Perhaps you are addressing this and I'm missing it; perhaps, through some operation I don't understand, it doesn't apply in this case.

In that vein...

Given a dataset which has an associated error of +/- x units, how can any method distinguish between the infinite number of curves that lie within that band over that interval? Simply being close doesn't always count.

Hypothetical: suppose a data recorder has been rated as accurate to +/- 1 unit. It is fed a calibration signal and the captured data is compared to the known input. If the calibration signal is a straight line defined by, say, f(x) = x + 1, then that input, along with a tolerance band of +/- 1, looks like this:

http://i6.tinypic.com/8676ogp.png

Say the recorder produces the following output:

http://i15.tinypic.com/6ldofvo.png

which, when compared with the input and acceptable error range, is seen to be within the rated error.

http://i3.tinypic.com/6y4ej4h.png

In this scenario, the input is known and the measurement is judged against an error band on the known value. This situation is somewhat reversed from what we have, but is quite instructive nevertheless.
OneWhiteEye
...because I'm going to take away the calibration signal. Now, given an arbitrary signal, we no longer have the benefit of a calibration curve with which to transform away the systematic error of the recorder, we just have to trust it to be accurate within +/- 1 pixel (oops, did I say pixel? I meant unit)

Incidentally, the curve I used to generate the 'recorder output' was

0.75*exp(0.05*x)*x - 0.025*x^2 + 1.5

Hardly a straight line, but well within a straight line +/- 1 unit.
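A quick numerical check of that claim (the interval [0, 10] is an assumption about the plotted range):

CODE
# Verify the stated generator stays within +/- 1 unit of f(x) = x + 1.
import numpy as np

x = np.linspace(0.0, 10.0, 1001)
recorded = 0.75 * np.exp(0.05 * x) * x - 0.025 * x**2 + 1.5
print(np.max(np.abs(recorded - (x + 1.0))))   # about 0.5, inside the band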

Please stay with me.
OneWhiteEye
If our hypothetical recorder just happens to be used to measure a signal that is truly a straight line, we'll get the output shown above. Given the recorder is known to be accurate to +/- 1, but only knowing that, we construct a graph showing the output with the error band:

http://i16.tinypic.com/834zokj.png

Say you want to use the Bayesian method to distinguish among competing hypotheses H1 and H2 offered to explain the nature of the signal captured:

H1: f(x) = Ax + B
H2: f(x) = A*exp(Bx)x + Bx^2 + C

Uhh, which one wins?
OneWhiteEye
The latter, I'd hope, since it would be absurd to conclude the values represent a straight line, given the input data.

Now think of that error band as noise, but you have many, many points. Apply the Bayesian methods again; what do you get? I'd wager you get the same result, with discrimination according to how bad the noise is versus the number of points. But, no matter how many points you add, if they're randomly distributed around THAT curve, you will not select the correct hypothesis, f = Ax + B.

On the other hand, if you had even a few points from a thousand recorders, each with the same error band, but random variations in their skew, I'd bet Bayes selects the straight line.

Let's have a look at all the curves together, one more time:

[Image: actual and recorded with bands]

Do you see what I'm getting at, Dr. Benson?
OneWhiteEye
In case the answer is no, please understand that what I've given you is like that hypothetical data recording. I know -

- there is error up to 2 pixels
- it is not necessarily small compared to the signal
- it is not random

There is zero noise in that data, for all practical purposes. I don't mean a little, I mean none. Drift, yes. Some of it from my eye/hand coordination, but not much. This data is already smoothed by virtue of the process and my eyes are NOT 2 pixels off.

A plot of the C447-449 data:

http://i11.tinypic.com/8evh1cl.png

A close-up of the latter portion:
http://i13.tinypic.com/7y3wp69.png

The first clue is that the curves differ by as much as almost two pixels... does that even make any sense for adjacent columns? Well, yes it does.

I don't know how much this affects your results, but it should be of concern until it's resolved. IMO.
OneWhiteEye
Correction:

H2: f(x) = Aexp(Bx)x + Cx^2 + D
David B. Benson
I'll start replying to questions in a bit, but just now I post the vertical avalanche hypotheses using the C448 data:

CODE
BV-ZSS-F-exp-pow-stretch  dB= 0.0 sd= 0.123

Sef-ZSS-F-exp-pow-stretch dB= 0.1 sd= 0.133
BV-ZS-F-exp-pow-stretch   dB= 0.3 sd= 0.146


and later I'll rerun using the C447 data now that I have good starting values for these hypotheses. I am rather confident that the vertical avalanche hypotheses represent a highly likely (and simple) explanation for the progressive collapse. More on this later.
David B. Benson
Here is C447 with decent starting estimates for the parameters:

CODE
Sef-ZSS-F-exp-pow-stretch dB= 0.0 sd= 0.098

BV-ZSS-F-exp-pow-stretch  dB= 0.3 sd= 0.110
BV-ZS-F-exp-pow-stretch   dB= 0.5 sd= 0.120


So, vertical avalanches are the best for all three sets of data.
David B. Benson
QUOTE (OneWhiteEye+Dec 12 2007, 10:11 PM)
What made you change your mind?

At first? Just to satisfy your curiosity.

I learned something about proper data exploration from that.

You were right and I was wrong.
David B. Benson
QUOTE (OneWhiteEye+Dec 13 2007, 12:10 AM)
Say you want to use the Bayesian method to distinguish among competing hypotheses H1 and H2 offered to explain the nature of the signal captured:

H1: f(x) = Ax + B
H2: f(x) = A*exp(Bx)x + Bx^2 + C

Uhh, which one wins?

To apply the naive Bayes factor method, there has to be some estimate of the standard deviation of the random errors. I'll use your +/- one unit to mean a standard deviation of 1.0.

With that, in the situation you describe, neither is substantially better than the other at describing the signal. It is buried too far down in the noise for either hypothesis to be better than the other.

=========================
In similar cases, I invoke the principle of parsimony, Ockham's Razor. By that criterion, H1 is to be preferred.
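For concreteness, here is one textbook way to turn two least-squares fits into a Bayes factor in decibels, assuming independent Gaussian errors of known sigma and ignoring parameter-count penalties; this illustrates the idea and is not necessarily the exact form used in the runs above:

CODE
# Naive Bayes factor between two curve hypotheses fitted to the same data,
# assuming independent Gaussian errors of known standard deviation sigma.
import numpy as np

def bayes_factor_db(residuals_h1, residuals_h2, sigma=1.0):
    sse1 = np.sum(np.square(residuals_h1))
    sse2 = np.sum(np.square(residuals_h2))
    log10_bf = (sse2 - sse1) / (2.0 * sigma**2) / np.log(10.0)
    return 10.0 * log10_bf            # positive favours H1, in decibels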
David B. Benson
QUOTE (OneWhiteEye+Dec 13 2007, 12:16 AM)
Do you see what I'm getting at, Dr. Benson?

If the issue is systematic error in the recording device, then the naive Bayes factor method will not be adequate to compare two hypotheses regarding the signal (assuming poor signal-to-noise ratio).

If the nature of the systematic error is known, it can usually be compensated for.
David B. Benson
QUOTE (OneWhiteEye+Dec 13 2007, 01:06 AM)
- there is error up to 2 pixels
- it is not necessarily small compared to the signal
- it is not random

Drift, yes.

I don't know how much this affects your results, but it should be of concern until it's resolved. IMO.

2 pixels out of 170 or so? Seems small to me. How do you know it is not random?

Explain your notion of drift further, please.

My opinion as well.

As for einsteen's original question, another way to consider the degree of consistency of the results is to compare parameter values for the three different pixel columns of data. I'll just give the parameter for the resisting force.

Sef-ZSS-F-exp-pow-stretch
C447: 3.3424
C448: 3.2569
C449: 3.1709

BV-ZSS-F-exp-pow-stretch
C447: 3.3795
C448: 3.3262
C449: 3.1088

To me, with up to 9% differences, it seems that the differences in the three data sets ought to be understood and removed. If the differences were all below say 5% I wouldn't bother.
David B. Benson
Then again, maybe it does not matter. I just finished running Sef-ZSS-F-exp-pow-stretch with the artificial constraint that the acceleration must always be positive (downwards). This resulted in the resistance force parameter being only 0.9942, with changes in the stretch shape parameters compensating enough so that the standard deviation remained a respectable 0.109.
OneWhiteEye
QUOTE (David B. Benson+Dec 13 2007, 09:52 PM)
You were right and I was wrong.

Thanks for saying that. Such a statement is an uncommon occurrence in message boards. It speaks volumes for your integrity.
OneWhiteEye
QUOTE (David B. Benson+Dec 13 2007, 10:26 PM)
To apply the naive Bayes factor method, there has to be some estimate of the standard deviation of the random errors. I'll use your +/- one unit to mean a standard deviation of 1.0.

There's something important here, but I'm going to bag it for later.

QUOTE
With that, in the situation you describe, neither is substantially better than the other at describing the signal.  It is buried too far down in the noise for either hypothesis to be better than the other.

Fabulously good answer.

QUOTE
In similar cases, I invoke the principle of parsimony, Ockham's Razor. By that criterion, H1 is to be preferred.

As well, another good answer.
OneWhiteEye
QUOTE (David B. Benson+Dec 13 2007, 11:17 PM)
2 pixels out of 170 or so?  Seems small to me.

It depends on when this 'drift' occurs. Here's a chart of the ratio of 2 pixels' error to the signal for the last 3.8 seconds of columns 447-449:

http://i15.tinypic.com/82m2fis.png

It can't be like that, but that's the worst-case boundary over time. Without more precise quantification, it's impossible to rule out the potential for (say) 5 - 10% error over a 1-2 second span.

QUOTE
How do you know it is not random?

Because I know where it comes from, how it got there. Executive summary:

When you're plotting a smear that represents ONE pixel's width - the ONLY way you'll get a correct result is if the feature you're looking at STAYS at that same horizontal pixel location. If it moves laterally, then you are looking at something else over time. The dark band MOVES to the left 6 pixels by the end of the dataset, probably a lot of it early on. The band is shaped like an inverted 'U' so that curve is imposed on the real curve. And it is a curve, not a random deviation from true. Like a drift.

In recent posts I explained this in excruciating detail, probably too much. But I could not characterize it any better, so if you need more detail, I refer you to this post for an overview of the problem and this post for the examination which arrives at an estimate of 2 pixels' error.

QUOTE
Explain your notion of drift further, please.

Slowly varying deviations. Practically speaking, very slow noise - one or two 'zero crossings' if any in a dataset. The way a car travels in a lane: the deviations from centerline plotted over a few miles might look like noise, but over a hundred meters - it's drift. Just what you might think it is.
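To make the analogy concrete, here is a tiny simulation contrasting the two; 'drift' is modeled as the running integral of small random errors, which is an illustrative choice, not a claim about the actual camera:

CODE
import numpy as np

rng = np.random.default_rng(0)
n = 500

white = rng.normal(0.0, 1.0, n)               # frame-to-frame jitter
drift = np.cumsum(rng.normal(0.0, 0.05, n))   # integrated small errors

def zero_crossings(x):
    # count sign changes over the whole record
    return int(np.sum(np.diff(np.sign(x)) != 0))

print("white-noise zero crossings:", zero_crossings(white))  # roughly n/2
print("drift zero crossings:      ", zero_crossings(drift))  # a handful at most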

OneWhiteEye
You know, I was terribly green at all this when I posted that data. I didn't know all I know now and I hadn't examined this video frame-by-frame many hundreds of times or blown up pixels to an inch across. Smear-o-grams are soooo easy. They're fine when:

P1) feature motion is collinear with the pixel slice (the 1D axis)
-OR-
P2) the feature is a thin STRAIGHT boundary, orthogonal to the slice and longer than any DRIFT off-axis
-OR-
P3) extreme accuracy is not required (quick-look curves, rough comparisons)

Either of the first two cases allows for accurate representation of motion; the last simply doesn't require that much accuracy. The C447-449 data, sadly, fails all three.

R1) the antenna moves left as it drops
R2) the band is not even close to a horizontal line
R3) even two pixels of deviation mixed up between three datasets causes significant rearrangement of rank in the Bayesian analysis

As to R3, the error almost assuredly causes the jambalaya effect in the results. It's about the only thing that could! And it should! Your analysis is very sensitive - it needs to be. The transcription is probably accurate to far better than a pixel. But the transcriptions are of curves that are already distorted. Like perspective in nature, but from artifact.
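For anyone who wants to build a smear-o-gram themselves, a sketch of the basic extraction with OpenCV follows; the filename is a placeholder, and per P1/P2 above the output is only trustworthy if the tracked feature stays in the chosen column:

CODE
import cv2
import numpy as np

cap = cv2.VideoCapture("collapse_clip.avi")   # placeholder filename
col = 447                                     # pixel column, as in C447

slices = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    slices.append(gray[:, col])               # one vertical slice per frame

cap.release()
smear = np.stack(slices, axis=1)              # rows = image height, cols = time
cv2.imwrite("smear_447.png", smear)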

QUOTE
If the issue is systematic error in the recording device, then the naive Bayes factor method will not be adequate to compare two hypotheses regarding the signal (assuming poor signal-to-noise ratio).

Makes perfect sense, I think you can see that's been my take since we started discussing this. All I need to do is convince you the error is systematic, find out how large it is, then...

QUOTE
If the nature of the systematic error is known, it can usually be compensated for.

Indeed. I thought it wouldn't be too hard to characterize the deviation over time for a correction, but I no longer feel that way for a variety of reasons. Let's hash out the matters above, then discuss this. Honestly, had I spent as much time working towards automation as I had explaining why the C447-449 set is questionable, you'd be able to run the best data already.

QUOTE
I'll use your +/- one unit to mean a standard deviation of 1.0.

I'm still not ready to deal with that seemingly simple statement. It's really the crux of the matter but I'm talked out right now. ('thank god!' says spook 1, 'fat chance' says spook 2)
OneWhiteEye
When you try to draw a line with a straightedge and a ball-point pen, the pen must be held at a constant angle or the tip DRIFTS and you don't get a straight line. Little kids in pre-school experience this! This is like the error in C447-449! Your analysis does not tolerate systematic distortion, even small amounts for some hypotheses, nor should it. OBJECTS IN MIRROR ARE CLOSER THAN THEY APPEAR!

Soon enough, you'll have 114 points for each feature over the same 3.82 seconds, 10x the position resolution, and probably only marginally greater standard deviation than placing points by hand. Full 2D tracking, ONLY perspective correction needed. It will look a bit noisy but, as we know, that's OK. Stochastic methods prefer stochastic input and lots of it.

Laser precision, in the final analysis, is my prediction. Your rankings may look a bit different than they do when tested against the clean, but slightly warped, datasets of columns 447 - 449.

('told ya' says spook 2)
adoucette
The National Construction Safety Team (NCST) Advisory Committee (Committee), National Institute of Standards and Technology (NIST), will meet via teleconference Tuesday, December 18, 2007, from 1 p.m. to 3 p.m. The meeting will be audio webcast so that the public may listen to the meeting as it takes place.

http://wtc.nist.gov/media/NCSTACmeetingDec18_2007.htm

Webcast URL:
http://origin.eastbaymedia.com/~nist/asx/nist-wtc-121807.asx


The primary purpose of this meeting is for the NCST Advisory Committee to
discuss its annual report to the Congress and for NIST to update the Committee
on the status of the investigation of World Trade Center 7. The meeting will
be conducted via teleconference with a live audio webcast. The final agenda
will be posted on the NIST Web site at http://www.nist.gov/ncst.
Individuals and representatives of organizations who would like to offer
comments and suggestions related to items on the Committee’s agenda for
this meeting, are invited to request a place on the agenda. Approximately
one-half hour will be reserved for public comments, and speaking times will be
assigned on a first-come, first-served basis. The amount of time per speaker
will be determined by the number of requests received, but is likely to be 5
minutes each. Questions from the public will not be considered during this
period. Speakers who wish to expand upon their oral statements, those who
had wished to speak but could not be accommodated on the agenda, and those
who were unable to attend in person are invited to submit written statements to
the National Construction Safety Team Advisory Committee, National Institute
of Standards and Technology, 100 Bureau Drive, MS 8611, Gaithersburg,
Maryland 20899–8611, via fax at (301) 975–6122, or electronically by e-mail to
ncstac@nist.gov.
David B. Benson
QUOTE (OneWhiteEye+Dec 14 2007, 01:47 AM)
Slowly varying deviations. Practically speaking, very slow noise - one or two 'zero crossings' if any in a dataset. The way a car travels in a lane: the deviations from centerline plotted over a few miles might look like noise, but over a hundred meters - it's drift. Just what you might think it is.

Drift is then an example of red noise. Red noise is some effect treated as random but which is correlated with itself over 'short' periods of time.

One possible cause of red noise in the C447--9 data is the lensing effect of hot air pulses rising. Another possible effect, towards the end, is that the tower top is moving at about 27 m/s. That ought to cause turbulence in the air from the roof line to the antenna tower. Since the vortexes have pressure gradients, again (I think) there will be a lensing effect.

But more: the flowing-avalanche resistive force implies that the equation I am solving, and perhaps the tower itself, may have slow, fairly small instabilities, both in the solution and in nature. An extreme example, much stronger than the effect here, is a pulse jet engine, which I hope suffices to convey the nature of this less-than-perfectly-smooth flow. This effect might explain some of the red-noise deviations between the data and the calculated points.

Strictly speaking, it is something of a cheat to assume Gaussian noise when we know that it is red noise. However, if the red noise correlation with itself is of short duration (the case here), then naively assuming Gaussian noise only contaminates the Bayesian factor method slightly.
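To make 'short duration' concrete, here is a small sketch with red noise modeled as an AR(1) process; the correlation parameter is illustrative, not measured from the C447 data:

CODE
import numpy as np

rng = np.random.default_rng(1)
r, n = 0.5, 10000          # AR(1) coefficient (illustrative) and sample count
x = np.zeros(n)
for i in range(1, n):
    x[i] = r * x[i - 1] + rng.normal()   # x correlates with its recent past

def autocorr(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

for lag in (1, 2, 3, 5, 10):
    # decays like r**lag: essentially gone within a few samples
    print(f"lag {lag:2d}: {autocorr(x, lag):+.3f}")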

Taking everything said so far into account, later today I'll do a computer run of the favored hypotheses using the combined data, just to see what may be seen.
David B. Benson
QUOTE (OneWhiteEye+Dec 14 2007, 02:47 AM)
R3) even two pixels of deviation mixed up between three datasets causes significant rearrangement of rank in the Bayesian analysis

Let's hash out the matters above, then discuss this.

Rank rearrangement is actually a non-issue. What matters is the number of decibans down hypotheses stand in comparison to the best fit. That appears to change little enough that hypotheses disconfirmed by one data set are (mostly) disconfirmed by all three. The one hypothesis for which this was not so was then discarded as too sensitive to exact data values.

I'm a bit lost. Which matters are those? Or did my previous post address (at least some of) them?
OneWhiteEye
QUOTE
Drift is then an example of red noise. Red noise is some effect treated as random but which is correlated with itself over 'short' periods of time.


I sincerely regret throwing out that example. I've had a grueling week and that was not my best writing. It's foolish to offer such a poor example when I've already rigorously and thoroughly characterized the effect in previous posts. It's insane to offer yet more explanations.

I'm exhausted, and next week promises to be worse. There's part of me that wants to carry this through, but the greater part of me says...

OK
David B. Benson
Another possible cause is the hat truss acting as a spring with the antenna tower riding 'up-n-down' on top of it, a motion superimposed on the general downtrend.

In any case, I am now going to start working on determining the tilt. I have a plan. smile.gif

OneWhiteEye --- Don't over do it. There is no rush.
OneWhiteEye
QUOTE (David B. Benson+Dec 15 2007, 06:13 PM)
Another possible cause is the hat truss acting as a spring with the antenna tower riding 'up-n-down' on top of it, a motion superimposed on the general downtrend.

Distinct possibility. As well:

QUOTE
One possible cause of red noise in the C447--9 data is the lensing effect of hot air pulses rising. Another possible effect, towards the end, is that the tower top is moving at about 27 m/s. That ought to cause turbulence in the air from the roof line to the antenna tower. Since the vortexes have pressure gradients, again (I think) there will be a lensing effect.


No doubt. (edit: but I know they're all a lot less than the distortion I've been describing)

QUOTE
In any case, I am now going to start working on determining the tilt.

Perhaps a little later I can post something on this, might be helpful. The first and simplest approximation is to assume the target motion deviates only infinitesimally from the optical axis, then use a calculation similar to your perspective correction.

The basic distinction is in using the differential form of the tangent relation (that is, use theta + delta_theta). First, take the difference band - dish for each frame. Then, for the resulting set, obtain the set of differences (p(i+1) - p(i)); this is simply the first difference wrt time, which is the result obtained when considering delta_y = Z(tan(theta + delta_theta) - tan(theta)).

Obviously, this is a crude approximation, but one does not need to know the actual distance between the antenna features, as the work is with differences. Since rotation about a pivot below the roofline results in a translation (down and away) plus an equivalent rotation about the lower feature, it's a good guess that the translation will not matter too much compared with the angle change. One can solve for the delta_theta for each frame and add it to the initial angle.
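Here is one way the procedure might be coded; the distance, pixel scale, and feature tracks below are placeholders to be replaced by measured values:

CODE
import numpy as np

Z = 1000.0                  # camera-to-antenna distance, m (assumed)
theta0 = np.radians(14.0)   # initial up-angle, from the 13--15 degree estimate
m_per_px = 0.11             # metre height of one pixel at the tower (assumed)

# Placeholder per-frame vertical pixel positions of the two features:
band_px = np.array([100.0, 101.2, 102.9, 105.1])
dish_px = np.array([140.0, 140.8, 142.1, 143.9])

sep_m = (dish_px - band_px) * m_per_px   # band - dish separation, metres
d_sep = np.diff(sep_m)                   # first difference wrt time

# Invert delta_y = Z*(tan(theta + dtheta) - tan(theta)) frame by frame
# and accumulate onto the initial angle:
theta = theta0
tilt = [theta0]
for dy in d_sep:
    theta = np.arctan(np.tan(theta) + dy / Z)
    tilt.append(theta)

print(np.degrees(tilt))   # estimated tilt angle per frame, degrees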
Trippy
Spending 12 hours a day crushing and grinding (Gold) ore samples for assaying has taught me a few things.

Like it doesn't take a whole lot of pressure to make schist fail explosively (What does seem to matter is power and impulse).

I do have a question though, based on something I observed last night. What was the average size of the debris in the dust cloud? (The dust particles, I mean; I'll go into more detail later.)

(edited to translate post into english).
David B. Benson
QUOTE (OneWhiteEye+Dec 15 2007, 11:35 AM)
The basic distinction is in using the differential form of the tangent relation (that is, use theta + delta_theta). First, take the difference band - dish for each frame. Then, for the resulting set, obtain the set of differences (p(i+1) - p(i)); this is simply the first difference wrt time, which is the result obtained when considering delta_y = Z(tan(theta + delta_theta) - tan(theta)).

Obviously, this is a crude approximation, but one does not need to know the actual distance between the antenna features, as the work is with differences. Since rotation about a pivot below the roofline results in a translation (down and away) plus an equivalent rotation about the lower feature, it's a good guess that the translation will not matter too much compared with the angle change. One can solve for the delta_theta for each frame and add it to the initial angle.

Interesting idea. smile.gif

But it would be helpful to have your estimate of the distance from the roofline to the dish(es).
lozenge124
NIST releases additional faq supplement:

http://wtc.nist.gov/pubs/factsheets/faqs_12_2007.htm

Question 1 is particularly interesting:
QUOTE
1. Was there enough gravitational energy present in the World Trade Center Towers to cause the collapse of the intact floors below the impact floors?  Why was the collapse of WTC 1 and 2 not arrested by the intact structure below the floors where columns first began to buckle?

Yes, there was more than enough gravitational load to cause the collapse of the floors below the level of collapse initiation in both WTC Towers.  The vertical capacity of the connections supporting an intact floor below the level of collapse was adequate to carry the load of 11 additional floors if the load was applied gradually and 6 additional floors if the load was applied suddenly (as was the case).  Since the number of floors above the approximate floor of collapse initiation exceeded six in each WTC Tower (12 and 29 floors, respectively), the floors below the level of collapse initiation were unable to resist the suddenly applied gravitational load from the upper floors of the buildings.  Details of this finding are provided below:

Consider a typical floor immediately below the level of collapse initiation and conservatively assume that the floor is still supported on all columns (i.e., the columns below the intact floor did not buckle or peel-off due to the failure of the columns above).  Consider further the truss seat connections between the primary floor trusses and the exterior wall columns or core columns.  The individual connection capacities ranged from 94,000 lb to 395,000 lb, with a total vertical load capacity for the connections on a typical floor of 29,000,000 lb (See Section 5.2.4 of NIST NCSTAR 1-6C).  The total floor area outside the core was approximately 31,000 ft2, and the average load on a floor under service conditions on September 11, 2001 was 80 lb/ft2.  Thus, the total vertical load on a floor outside the core can be estimated by multiplying the floor area (31,000 ft2) by the gravitational load (80 lb/ft2), which yields 2,500,000 lb (this is a conservative load estimate since it ignores the weight contribution of the heavier mechanical floors at the top of each WTC Tower).  By dividing the total vertical connection capacity (29,000,000 lb) of a floor by the total vertical load applied to the connections (2,500,000 lb), the number of floors that can be supported by an intact floor is calculated to be a total of 12 floors or 11 additional floors.

This simplified and conservative analysis indicates that the floor connections could have carried only a maximum of about 11 additional floors if the load from these floors were applied statically.  Even this number is (conservatively) high, since the load from above the collapsing floor is being applied suddenly.  Since the dynamic amplification factor for a suddenly applied load is 2, an intact floor below the level of collapse initiation could not have supported more than six floors.  Since the number of floors above the level where the collapse initiated, exceeded 6 for both towers (12 for WTC 1 and 29 for WTC 2), neither tower could have arrested the progression of collapse once collapse initiated.  In reality, the highest intact floor was about three (WTC 2) to six (WTC 1) floors below the level of collapse initiation.  Thus, more than the 12 to 29 floors reported above actually loaded the intact floor suddenly. 
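For readers checking the figures, the quoted calculation reduces to a few lines; all inputs are NIST's own numbers, and the factor-of-2 comment is the standard work-balance derivation for a suddenly applied load:

CODE
# All inputs are the figures quoted in NIST's answer above.
floor_area_ft2 = 31_000            # office floor area outside the core
service_load_psf = 80              # lb/ft^2 on September 11, 2001
floor_load = floor_area_ft2 * service_load_psf   # 2,480,000 lb (~2,500,000)

connection_capacity = 29_000_000   # lb, total truss-seat capacity per floor

static_floors = connection_capacity / floor_load # ~11.7: "a total of 12"
# Suddenly applied load: equating work W*d with strain energy (1/2)*k*d^2
# gives d = 2W/k, i.e. a dynamic amplification factor of 2.
sudden_floors = static_floors / 2                # ~6 floors

print(f"load per floor:            {floor_load:,} lb")
print(f"floors carried statically: {static_floors:.1f}")
print(f"floors carried if sudden:  {sudden_floors:.1f}")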


Note, no reference to Bazant, no discussion of core & perimeter column strength, no discussion of upper block tilting, the column to column bolts, no discussion of "wedging", etc... According to NIST, you just need to consider 2 parameters to determine the inevitability of collapse past initiation: truss seat capacity & floor load! To anyone paying attention this is MIND-BOGGLING! Apparently, you can make a tube in tube structure with core & perimeter support columns of infinite capacity, and as long as you overload the truss seat connections of 1 floor the whole thing will come crashing down (progressively of course).

And just in case you are thinking there has to be more to this, that NIST cannot simply be waving this problem away like this, check question 10.
QUOTE
10. Why didn’t NIST fully model the collapse initiation and propagation of WTC Towers?

The first objective of the NIST Investigation included determining why and how WTC 1 and WTC 2 collapsed following the initial impacts of the aircraft (NIST NCSTAR 1).  Determining the sequence of events leading up to collapse initiation was critical to fulfilling this objective.  Once the collapse had begun, the propagation of the collapse was readily explained without the same complexity of modeling, as shown in the response to question #1 above.
(italics added)

unbelievable.
adoucette
QUOTE (lozenge124+Dec 15 2007, 10:54 PM)
According to NIST, you just need to consider 2 parameters to determine the inevitability of collapse past initiation: truss seat capacity & floor load! To anyone paying attention this is MIND-BOGGLING! Apparently, you can make a tube in tube structure with core & perimeter support columns of infinite capacity, and as long as you overload the truss seat connections of 1 floor the whole thing will come crashing down (progressively of course).


What's strange about that?

It's BY FAR a simpler description than Bazant's (which does deal with the columns) and simply shows how ONE MANNER of progressive collapse of a structure built like the WTC towers can occur.

If you are going to argue that progressive collapse is impossible (as many CT'ers do) then explain what is wrong with this version of how a progressive collapse can occur.

Arthur
David B. Benson
QUOTE (lozenge124+Dec 15 2007, 08:54 PM)
According to NIST, you just need to consider 2 parameters to determine the inevitability of collapse past initiation: truss seat capacity & floor load!

Yes. I did a version of this as did shagster. NIST's seems simpler and quite clearly explained.
David B. Benson
QUOTE (lozenge124+Dec 15 2007, 08:54 PM)
Apparently, you can make a tube in tube structure with core & perimeter support columns of infinite capacity, and as long as you overload the truss seat connections of 1 floor the whole thing will come crashing down (progressively of course).

No. Only the office floors, i.e., outside the core. Bageling would occur and leave the exterior walls and the core intact.

Of course, that is not what occurred.
Nor what NIST is describing in the answer to question #1.
Trippy
QUOTE (lozenge124+Dec 16 2007, 04:54 PM)
Note, no reference to Bazant, no discussion of core & perimeter column strength, no discussion of upper block tilting, the column to column bolts, no discussion of "wedging", etc... According to NIST, you just need to consider 2 parameters to determine the inevitability of collapse past initiation: truss seat capacity & floor load! To anyone paying attention this is MIND-BOGGLING! Apparently, you can make a tube in tube structure with core & perimeter support columns of infinite capacity, and as long as you overload the truss seat connections of 1 floor the whole thing will come crashing down (progressively of course).

Think about what you're saying for a moment.

Suppose I have a floor that weighs 10 tons, has a 100-ton capacity, and a 10% safety margin, and I then load that floor up to 111 tons and let it collapse onto the floor below it.

The floor below it now has 121 tons of weight on it, only a 100-ton capacity, and is 11 tons over its 10% safety margin.

What, precisely, do you think is going to happen next? (Especially if the second floor is carrying any load of its own.)
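The bookkeeping can be written out in a few lines. This sketch is purely static; applying the dynamic factor discussed above would only make it worse:

CODE
floor_weight = 10    # tons, each floor's own weight
threshold = 110      # tons: 100-ton capacity plus the 10% margin

load = 111           # the initial overload on the first floor
for floor in range(1, 8):
    if load <= threshold:
        print(f"floor {floor}: {load} tons -> holds; cascade arrested")
        break
    print(f"floor {floor}: {load} tons -> fails")
    load += floor_weight   # the failed floor's own weight joins the pile
else:
    print("... and the load only keeps growing")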
Grumpy
David B. Benson

QUOTE
Of course, that is not what occurred.
Nor what NIST is describing in the answer to question #1.


Sorry Doc., but that seems to be EXACTLY what happened, as I have described numerous times before. I'm not arguing with your work on the energy equations, but it does match the appearance of most of the floor supports (stripped vertically), the expulsions of dust below the level of the crush front, the funneling of the floor debris inside the still-intact outer frames into the basements, and the banana-peeling of the outer frames. Bagelling did not initiate collapse, but WAS a result of that initiation, leaving both the core and the outer frame without lateral support.

Grumpy cool.gif
David B. Benson
QUOTE (Grumpy+Dec 16 2007, 12:06 PM)
... but that seems to be EXACTLY what happened, ...

No. Neither the exterior walls nor the core columns were infinitely strong, as was lozenge124's supposition. The walls peeled apart and most of the core was crushed, in both cases.

Which is why I said that destruction of only the office floors was not observed.

We are in fundamental agreement, I am sure.
Grumpy
David B. Benson

Sorry, my bad.

Grumpy huh.gif
David B. Benson
QUOTE (Grumpy+Dec 16 2007, 12:46 PM)
Grumpy huh.gif

De nada.
David B. Benson
More realistic resistance force function does slightly better on C447 data:

CODE
Sef-SS+ZSS-F-exp-pow-stretch dB= 0.0 sd= 0.079

BV-SS+ZSS-F-exp-pow-stretch dB= 0.4 sd= 0.094
Sef-ZSS-F-exp-pow-stretch dB= 0.5 sd= 0.098


SS+ZSS-F denotes a force function form of

k0SS + k1(Z-Z0)SS

where S is the speed. The first term, with k0 quite small, is the air movement force. The second term, with k1 rather large, is similar to the flowing snow avalanche resistance, but the mass (Z-Z0) is only the mass of the crushed materials. This term is due, in part, to the continued re-crushing of concrete and other materials.
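To show how such a force term enters the equation of motion, here is a crude Euler-integration sketch under the uniform-density, no-mass-loss assumptions; the parameter values are illustrative, not the fitted ones:

CODE
g = 9.81
mu = 1.0e6                        # kg per metre of tower height (assumed)
k0, k1, Z0 = 1.0e3, 2.0e4, 3.7    # illustrative SS+ZSS force parameters

dt, Z, S = 1.0e-3, 3.7, 0.0       # start one storey crushed, at rest
for _ in range(4000):             # 4 seconds of collapse
    m = mu * Z                    # accreted mass, no mass loss
    F = k0 * S**2 + k1 * (Z - Z0) * S**2   # SS + ZSS resistive force
    # momentum balance d(m*S)/dt = m*g - F, with dm/dt = mu*S accretion:
    a = g - (F + mu * S**2) / m
    S += a * dt
    Z += S * dt

print(f"after 4 s: crushed depth Z = {Z:.1f} m, speed S = {S:.1f} m/s")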
OneWhiteEye
David B. Benson:

I've overlooked something simple in perspective correction; can't say if you have, also. I suspect so since I don't think you have the optical axis angle. Please inspect this diagram and see if you get what I mean without explanation, as my fatigue prevents giving one right now.

http://i13.tinypic.com/6odidth.png
OneWhiteEye
http://i3.tinypic.com/8bqei2o.png
David B. Benson
QUOTE (OneWhiteEye+Dec 16 2007, 08:04 PM)
User posted image

Yes, thank you. I should correct for that.
adoucette
QUOTE (OneWhiteEye+Dec 16 2007, 10:04 PM)
http://i3.tinypic.com/8bqei2o.png

What was your estimate of the elevation of the camera?

Arthur
David B. Benson
Arthur --- The best estimate (so far) is that the camera was at (World Trade Center) ground level. This means the up-angle is about 13--15 degrees of arc.
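A small sketch shows why that up-angle matters for the pixels-to-meters conversion; the distance and the per-pixel angular scale below are assumed values, not measurements:

CODE
import math

D = 900.0                         # horizontal camera-to-tower distance, m (assumed)
theta_axis = math.radians(14.0)   # optical-axis up-angle, mid of 13--15 degrees
rad_per_px = math.radians(0.01)   # angular height of one pixel (assumed)

def height_above_camera(row_offset_px):
    """Height (m) of a feature row_offset_px above the image centre."""
    theta = theta_axis + row_offset_px * rad_per_px
    return D * math.tan(theta)

# Equal pixel steps are not equal metre steps once the up-angle matters:
for px in (0, 100, 200, 300):
    print(f"{px:3d} px above centre -> {height_above_camera(px):7.2f} m above camera")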
Chainsaw,
Dear Dr. Benson do you know anyone who knows if the live decking of the floor pans was insulated?
Or where information can be found on the wiring system?
David B. Benson
QUOTE (Chainsaw,+Dec 17 2007, 02:45 PM)
... if the live decking of the floor pans was insulated?

Or where information can be found on the wiring system?

There is nothing in the NIST report (that I have seen) that suggests that it was. Considering the design, corrugated floor pans filled with concrete, it seems most unlikely.

No one single place, I fear. But generally, it was a mess. The main reason was the personal computers and servers, not planned for originally. So more wiring had to be added. Underneath the concrete (I don't know whether above or below the floor pans) were sheet metal conduits. Holes were drilled in the concrete to provide a wiring path between the workstation computers and the conduits.
David B. Benson
QUOTE (RealityCheck+Dec 17 2007, 03:34 PM)
How MANY such 'holes' were drilled through those concrete floor 'membranes'?

And we all know by now what firemen think about flames issuing from cable risers!

I don't know. At most one per workstation, but more likely one for every four workstations.

The cable risers in the 1975(?) fire in WTC 1 were completely open holes in the corners. After that fire, surely insulating covers were placed so that there was no free air path from one floor to the next.
Chainsaw,
QUOTE (David B. Benson+Dec 17 2007, 10:48 PM)
I don't know. At most one per workstation, but more likely one for every four workstations.

The cable risers in the 1975(?) fire in WTC 1 were completely open holes in the corners. After that fire, surely insulating covers were placed so that there was no free air path from one floor to the next.


Thank You Dr. Benson,
This is what I was looking into.

"The floor structure was then installed between the outer perimeter wall and the inner core. The floors also came in pre-assembled sections, consisting of 32-inch-deep (81-cm) trusses topped with a corrugated metal surface. To finish each floor, the crew would pour concrete over the metal surface and top it off with tile. The floor sections included pre-assembled ducts for phone lines and electrical cable, to make things easier for the electricians who would come in later. After the steel structure was in place, the crew attached the outer "skin" to the perimeter -- anodized aluminum, pre-cut into large panels. "
http://people.howstuffworks.com/wtc4.htm

When I modeled them in a fire, I got a result like this.

http://www.chiefmontagna.com/Articles/manhole%20fires.htm

The arcing, burning wire generates various toxic and combustible gases including high concentrations of carbon monoxide and neoprene gas.

The ignition can be explosive, sending the 300-pound manhole cover flying into the air. Manhole covers have been blown onto the roofs of six-story buildings and have gone up in the air only to come crashing down through the roofs of passing vehicles.

http://www.hhrobertson.com/4-code.cfm

For example, the Port Authority of New York and New Jersey prohibits the use of PVC in the World Trade Center. If installed, removal by the tenant is required.

I have done a few experiments on what would happen if these structures were non-insulated and exposed to fire.

Arthur as usual thinks I am nuts.

David B. Benson
Ok, tomorrow I can put in the improved pixels-to-meters conversion, which will make a few percent difference over the current hack.

With the best resistive force function forms, the progressive collapse had a terminal speed of close to

67 m/s

assuming uniform density and no mass loss. I'm a bit surprised at this low rate. huh.gif
metamars

NYC Window Washer Survives a 47-Story Fall

Unfortunately, his brother died in the same fall.

http://www.foxnews.com/wires/2007Dec07/0,4...ccident,00.html
QUOTE
NYC Window Washer Dies in 47-Story Fall

NEW YORK —  A window washer fell 47 stories to his death and his brother was critically injured Friday when the scaffolding on a high-rise apartment building gave way, authorities said.

The brothers were getting onto the scaffolding from the roof of the 47-story building when the platform gave way, Fire Department spokesman Seth Andrews said.

"They apparently fell all the way from the top," Fire Department spokesman John Mulligan said.

Edgar Moreno, 30, of Linden, N.J., was pronounced dead at the scene. His 37-year-old brother, whose name was not released, was in critical condition at New York Weill Cornell Medical Center, officials said.




Anyone care to calculate his terminal velocity, and compare to the estimated terminal velocity in the WTC 1 & 2 collapses?

I wonder if the brother who died had his bones reduced to chips?

Well, not really.
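The requested back-of-envelope, with textbook-ish assumptions for mass and drag, using v_t = sqrt(2mg/(rho*Cd*A)):

CODE
from math import sqrt

m, g = 80.0, 9.81    # kg, m/s^2 -- assumed body mass
rho = 1.2            # air density, kg/m^3
Cd_A = 0.45          # drag coefficient times frontal area, m^2 (assumed)

v_t = sqrt(2.0 * m * g / (rho * Cd_A))
print(f"human terminal velocity ~ {v_t:.0f} m/s")      # ~54 m/s

# From a 47-storey (~150 m) fall one barely reaches it:
h = 150.0
v_impact = min(v_t, sqrt(2.0 * g * h))                  # drag ignored below v_t
print(f"impact speed from {h:.0f} m: <= {v_impact:.0f} m/s, vs ~67 m/s above")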
gnik_isrever
It was a controlled demolition of all three buildings, and a missile which hit the pentagon.

It was under orders by George Bush and *** Cheney to prevent the National Economic Security And Reformation Act from being announced and implemented, and then to make us think the "ARABS" did it, so that we would support them in their choice to murder millions of innocent people.

If you think otherwise, I suggest you watch "loose change" videos, parts one, two, and the final cut. That video tells it all.

I do apologize if my reply is not really about the "physics" calculations you guys are doing, but I just thought it would be good for you guys to know the truth of the matter you are discussing.
OneWhiteEye
QUOTE (adoucette+Dec 17 2007, 06:39 PM)
What was your estimate of the elevation of the camera?

Arthur

Very interesting question. I don't know. There is more building footprint than ground in the region of possible locations for the camera. The buildings around there don't look too tall (the arrow is to the right of the building that also appears in the video):

http://i5.tinypic.com/8avf9zl.png

http://i4.tinypic.com/6q1m7f5.png

The camera could be in a window or on a roof. Maybe no higher than 80 feet?

I've done nothing with elevation as a parameter, yet. Got a pretty decent rendering with zero elevation a while back, but the aspect ratio was probably screwed up, squashing the vertical, so who knows? Eventually I want to revisit that using the floor heights you provided for vertical calibration. Once everything looks as close as possible from the ground, I'll play around with height, too. Brute force, but it will probably provide a fairly tight 3D bounding box for the camera location.

For now, camera x,y,z location, view angle, optical axis vector and even camera vertical orientation are all yet to be established with reasonable precision. Even if the location were documented, there would still be some uncertainty.

About the only thing I can say with some certainty is the optical axis intersects the NE corner at a point 123 feet below the roofline, give or take a few feet.
einsteen
QUOTE (adoucette+Dec 16 2007, 04:27 AM)
What's strange about that?

Its BY FAR a simpler description than Bazant (which does deal with the columns) and simply shows how ONE MANNER of progressive collapse of a structure built like the WTC towers can occur.

Arthur

I disagree; if you ignore the columns, then it is a matter of pure momentum transfer. On the one hand they reject pancaking for initiation, but then what does this story mean? They talk about the load of 6 floors and add a factor of 2 for dynamic load. They want to explain that the static load of the top section is already sufficient for a non-arrestable collapse, but that static load was already there for 30 years (and I understand this sounds like BS because there was no damage, but that's not the reason I mention it). If only one floor fails, then it is clear that the collapse will be stopped even with their factor of 2; that's why they rejected pancaking for initiation. They say that the whole top section falls down, which we also observe on video, but let's look at it from a static mass's point of view. How do you want to place that top section on the next floor in order to make that one fail? The perimeter columns and core columns will touch each other. Real funneling is only possible if the initiation is also pancaking. Their answers are blatant nonsense, in my humble opinion.
einsteen
QUOTE (David B. Benson+Dec 17 2007, 06:34 PM)
Yes, thank you. I should correct for that.

I thought you already had that script for perspective correction, or am I confusing it with parallax?