CompositesWorld

JUL 2017




measured after 1,000K cycles and 1,500K cycles. The purpose of the higher cycle count tests was to assess the ability of the PDA codes to predict stiffness degradation resulting from increasing levels of damage under continued fatigue loading, or to predict the life of the specimen if two-part failure occurred. The [0°/45°/90°/-45°]2S specimens experienced two-part failure prior to 2,000K cycles, while the other two layups showed a stiffness reduction but not two-part failure. This result proved to be a good indicator of the PDA capabilities, because many codes predicted two-part failure in some cases where the physical test specimens did not completely fail.

To enable observers to assess the ability of the PDA methods to capture the correct damage type and location in the open-hole fatigue specimens, X-ray tomographs were obtained intermittently during the physical fatigue testing portion of the program. "Recalibrated" models captured the location of damage relatively accurately, and the discrete damage models, BSAM and DCN, owing to the discrete damage nature of their formulations, captured the narrow features of matrix cracks more distinctly than the other models.

The fatigue blind prediction and recalibration comparisons, in terms of average percent error, are presented in Tables 2 and 3. Table 2 shows the overall average residual stiffness/strength percent error across all teams and all layups, as a function of different cycle points. For the fatigue results, on average, the blind predictions of residual properties after fatigue differed from the test by 42%, and the recalibrations differed from the test by 18%. For a given layup, PDA participants' blind fatigue predictions generally correlated better with the physical test results at low cycle counts, with the poorest correlations seen after cycle counts greater than 1,500K. As Table 3 demonstrates clearly, the [60°/0°/-60°]3S layup showed by far the poorest correlation between the blind and recalibrated predictions and the physical test data. (It should be noted that in the static phase of the AFRL benchmarking exercise, the blind predictions differed from the test by 18%, and the recalibrations differed by 8%. Thus, the fatigue simulations differed from test roughly twice as much as the static simulations.)

Conclusions

It is important to state one finding relative to a lack of capability found in many of the micromechanics codes. It was discovered that most of the codes were in the initial stages of developing fatigue capabilities during the program timeframe and did not yet have discrete interlaminar fatigue capability. However, some micromechanics codes attempted to simulate delaminations through element stiffness degradation. This capability is very important for many of the applicable aircraft problems. AFRL and Lockheed Martin are interested in assessing the current state of the art and determined this area to be a technology gap.

All the PDA teams had more issues with residual stiffness/strength predictions after fatigue for the [60°/0°/-60°]3S layup than they did with the other two layups, both for the blind predictions and the recalibrated results. In particular, many of the blind predictions failed to capture the correct fatigue failure mechanisms for the [60°/0°/-60°]3S layup.

Note that this fatigue benchmark exercise was for an R-ratio of 0.1 (all-tension) constant amplitude cycling. Much additional work is needed to handle compressive and spectrum loading. To improve the "predictiveness" and maturity of these fatigue analysis tools, a more extensive set of experiments and associated modeling studies should be designed to address these general spectrum loading effects.

For the fatigue results, the blind predictions differed on average from the test by 42%, and the recalibrations differed by 18%. Thus, for fatigue, it was concluded that the current accuracy of the fatigue PDA tools is low for predicting the response of these notched R=0.1 constant amplitude tests. There are many formulations, with a wide variation in the level of verification and validation for each. Improving these fatigue formulations is a necessity if PDA tools are to be used to accurately predict the effects of fatigue on aircraft structures.

ABOUT THE AUTHOR
Dr. Stephen Clay is a principal aerospace engineer for the Air Force Research Laboratory (AFRL, Wright-Patterson AFB, OH, US). After graduating with a BSME from the West Virginia Institute of Technology in 1992, he joined AFRL to conduct research on polymeric canopies and later focused on composite structures. Clay has an MS and Ph.D. in engineering mechanics from Virginia Polytechnic Institute and State University and is an Associate Fellow and active member of the American Institute of Aeronautics and Astronautics (AIAA, Reston, VA, US).

TABLE 2: Average error reported for all layups by all teams

                  Strength (% error)   Stiffness (% error)                    Overall average
After "x" cycles  200K/300K            200K/300K    1,000K    1,500K          (% error)
Blind                 39                   35          44        62               42
Recalibrated          19                   14          17        26               18

TABLE 3: Average of errors reported by all teams for each layup

                              [0/45/90/-45]2S   [60/0/-60]3S   [30/60/90/-60/-30]2S
                              (% error)         (% error)      (% error)
Residual strength, 200K/300K cycles
  Tension      Blind                16               74               26
               Recalibrated         13               35                9
  Compression  Blind                15               69               33
               Recalibrated         10               30               15
Residual stiffness, 200K/300K cycles
  Tension      Blind                10               72               21
               Recalibrated          4               23                5
  Compression  Blind                19               65               22
               Recalibrated         16               24               12
Residual stiffness, 1,000K cycles
  Tension      Blind                30               82               21
               Recalibrated          9               37                5
Residual stiffness, 1,500K cycles
  Tension      Blind                60               92               35
               Recalibrated         37               37                5
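The two tables are consistent with each other if each Table 2 entry is read as the simple arithmetic mean of the corresponding layup-level errors in Table 3. The article does not spell out the aggregation, so the simple-mean assumption is ours; the short Python sketch below, under that assumption, reproduces the Table 2 values from the Table 3 data.

# A minimal sketch (not from the article) showing how the Table 2 averages
# follow from the layup-level entries in Table 3, assuming each Table 2 value
# is a simple mean of the corresponding Table 3 percent errors.
from statistics import mean

# Table 3 entries: (property, cycles, prediction type) -> per-layup % errors
# for the [0/45/90/-45]2S, [60/0/-60]3S and [30/60/90/-60/-30]2S layups.
table3 = {
    ("strength", "200K/300K", "blind"):        [16, 74, 26, 15, 69, 33],  # tension + compression rows
    ("strength", "200K/300K", "recalibrated"): [13, 35, 9, 10, 30, 15],
    ("stiffness", "200K/300K", "blind"):        [10, 72, 21, 19, 65, 22],
    ("stiffness", "200K/300K", "recalibrated"): [4, 23, 5, 16, 24, 12],
    ("stiffness", "1,000K", "blind"):        [30, 82, 21],
    ("stiffness", "1,000K", "recalibrated"): [9, 37, 5],
    ("stiffness", "1,500K", "blind"):        [60, 92, 35],
    ("stiffness", "1,500K", "recalibrated"): [37, 37, 5],
}

for kind in ("blind", "recalibrated"):
    # Column entries of Table 2: mean error per property/cycle-count group
    for (prop, cycles, k), errors in table3.items():
        if k == kind:
            print(f"{kind:12s} {prop:9s} {cycles:10s} -> {mean(errors):.0f}%")
    # Overall average across every entry (last column of Table 2)
    all_errors = [e for (_, _, k), errs in table3.items() if k == kind for e in errs]
    print(f"{kind:12s} overall -> {mean(all_errors):.0f}%")  # ~42% blind, ~18% recalibrated

Run as written, the sketch prints 39, 35, 44 and 62 (overall 42%) for the blind predictions, and 19, 14, 17 and 26 (overall 18%) for the recalibrations, matching Table 2.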
