
Addressing CATAC Questions on the TMT Alternate Sites Studies



  1. Addressing CATAC Questions on the TMT Alternate Sites Studies. Matthias Schöck for the TMT DEOPS Group, March 2017

  2. General Comments on the ORM Site Data Analysis In summary: • We have access to a vast data set characterizing the conditions at ORM, collected over decades • Our main effort last year went into testing the validity of these data and making them comparable to our own studies of Maunakea 13N, Armazones and all other TMT candidate sites • We have a very high level of confidence in the comparison of site characteristics between the different sites

  3. Turbulence Measurement Instruments A quick primer on the instruments mentioned in the following: • DIMM – Differential Image Motion Monitor • Differential image motion measurement of a single star in two apertures of a ~35-cm telescope • Measures integrated seeing only • MASS – Multi-Aperture Scintillation Sensor • Scintillation measurements in 4 concentric apertures • Can be mounted on the same telescope as the DIMM, but is a separate instrument • Low-resolution (6-layer) profile of turbulence excluding the ground layer -> seeing, isoplanatic angle • Scintillation measurement also produces the coherence time • SODAR – Sound Detection and Ranging • Acoustic sounders • High-resolution profiles of ground layer turbulence strength and wind velocity • 10 – 800 m elevation, 5 – 20 m resolution • SCIDAR – Scintillation Detection and Ranging • Scintillation measurement along crossed paths toward a binary star • Full turbulence profiles with a resolution of a few hundred meters -> seeing, isoplanatic angle • In principle, SCIDARs can also measure wind profiles (which gives the coherence time), but the ORM SCIDAR is not yet set up to do so • Requires a 1 – 2 m telescope

  4. Original TMT Site Testing Instruments (figure labels: IRMA water vapor radiometers, all-sky cameras, weather stations, SODAR acoustic sounders, dust sensors, MASS/DIMM telescopes, sonic anemometers)

  5. ORM Site Map with Site Testing Stations

  6. General Comments on the ORM Site Data Analysis • As part of the alternate sites solicitation, IAC provided a report describing the available information for ORM • The TMT Site Testing Team spent more than 6 months of intense effort on: • Understanding the available data • Determining which data are most comprehensive and best suited for comparison with results for the sites we tested ourselves • Analyzing the raw data • Comparing with other existing data sets / results and searching for (in)consistencies • This includes data from independent sources • Sensitivity studies of the effect of uncertainties when comparing with the original candidate sites • Discussing the analysis / results with TMT astronomers (SAC) and providing input to Board discussions • Also done: • Discussions with and feedback from users of ORM and teams from other (incl. non-IAC) observatories • Collaborative studies of site conditions and operational experiences with teams from other (incl. non-IAC) observatories

  7. Summary of Site Parameters • Bottom-line summary results used in the TMT alternate sites assessment • The rest of the slides discuss their comparison and validity by answering the set of questions sent by the CATAC chair • We can spend as much or as little time on any part of it as you wish; I’d prefer to focus on discussion

  8. Question: Why are we using 60-m seeing for site comparison? • Detailed computational fluid dynamics (CFD) simulations were done during the original TMT site testing • Of the sites themselves and of the interior of the enclosure • Validation, e.g. with the site testing measurements and scintillometer measurements inside the CFHT and Keck domes • See “Local thermal seeing modeling validation through observatory measurements,” K. Vogiatzis et al., Proc. SPIE 8449 (2012) for the CFHT and Keck validation work • Result: Seeing inside the enclosure (dome and mirror seeing): • Dominated by the enclosure, mirrors, structure • Essentially independent of the outside conditions (esp. since we have adjustable vents) • This actually extends to some distance above the enclosure • Top of TMT enclosure: 56 m • Whether we use 55 or 60 m here, or even 50 or 65, makes no significant difference, and the seeing data are not accurate on this level anyway • For site comparisons, we can (and, in fact, should) ignore outside seeing below 60 meters (see the sketch after this slide) • NFIRAOS simulations include the dome / mirror seeing contribution from CFD results
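
To make the 60-m cut concrete, here is a minimal sketch (not the TMT pipeline itself) of how the seeing contributed by turbulence above a cut height follows from a measured Cn² profile; the function name and the demo profile are purely illustrative.

```python
import numpy as np

def seeing_above(h, cn2, h_cut=60.0, lam=0.5e-6):
    """Seeing (arcsec FWHM) from the part of a Cn^2 profile above h_cut.

    h   : layer heights above ground [m]
    cn2 : turbulence integral of each layer (integral of Cn^2 dh) [m^(1/3)]
    """
    J = cn2[h >= h_cut].sum()                                  # integral above the cut
    r0 = (0.423 * (2 * np.pi / lam) ** 2 * J) ** (-3.0 / 5.0)  # Fried parameter [m]
    return np.degrees(0.98 * lam / r0) * 3600.0                # FWHM ~ 0.98 lambda / r0

# Made-up three-layer profile: the strong 10-m layer simply drops out
# of the site comparison, as argued above.
h = np.array([10.0, 200.0, 4000.0])
cn2 = np.array([3e-13, 5e-14, 8e-14])
print(seeing_above(h, cn2))  # seeing from 60 m upward only
```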

  9. Related question: Why can we not use the 7-m seeing? High-resolution ground layer turbulence profiles from acoustic sounders (SODARs) from TMT site testing, shown here for 13N (SODAR data are absolute measurements, not scaled to match DIMM data; MASS/DIMM values are for simultaneous measurements with the SODAR only, so not exactly the same as the long-term statistics)

  10. Related question: Why can we not use the 7-m seeing? • MCAO performance is very much non-linear and depends on the shape and absolute values of the turbulence profile • We need to use the seeing which NFIRAOS will encounter, not the outside value measured from a height close to the ground • Both ground layer and free atmosphere seeing, and their ratio, differ between the sites • Cannot use either one and scale; need to interpolate between the two (see the sketch after this slide) • Ground layer turbulence profile shape also depends on site topography • It is similar among the mountain-top sites (Armazones & Tolonchar), and among the “plateau/slope” sites (Maunakea 13N, SPM & ORM) • The ORM profile is likely closer to MK 13N/SPM than to Armazones/Tolonchar • We use the same interpolation as for MK 13N • If the ORM turbulence profile were closer to an Armazones-type profile, a larger fraction of the ground layer would be below 60 m, in which case this would be a conservative estimate • As a side note, SODAR profiles are not at all a standard product of site testing campaigns; the original TMT candidate sites are among very few sites for which this information exists at all • We should not consider not having SODAR profiles a prohibitive lack of information for AO performance analyses • MCAO and ExAO systems correct a larger fraction of ground layer than of free atmosphere turbulence, so error propagation is comparatively benign w.r.t. uncertainties • A bit more on this in the backup slides
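
Since each seeing value scales as the 3/5 power of a turbulence integral, ground-layer and free-atmosphere components combine through their 5/3 powers rather than adding linearly, which is why neither one can simply be scaled. A one-line sketch (illustrative name):

```python
def combine_seeing(eps_gl, eps_fa):
    """Total seeing from ground-layer and free-atmosphere components.

    Each component scales as (integral of Cn^2)^(3/5), so the turbulence
    integrals, i.e. the 5/3 powers of the seeing values, add linearly.
    """
    return (eps_gl ** (5.0 / 3.0) + eps_fa ** (5.0 / 3.0)) ** (3.0 / 5.0)

# e.g. 0.5" of ground layer above 60 m plus 0.4" of free atmosphere:
print(combine_seeing(0.5, 0.4))  # ~0.68", not 0.9"
```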

  11. Example: Armazones is a mountain-top site Fun fact: the little white speck on the summit is our site testing telescope

  12. Example: Maunakea 13N is a plateau/slope site (figure labels: MASS/DIMM telescope, SODARs, weather station, solar panels)

  13. Example: ORM is a plateau/slope site (ORM aerial photo)

  14. ORM Turbulence Data • Site testing has been going on for decades at ORM • Lots of different data sets available • No long-term MASS/DIMM data set for direct comparison with the other TMT sites • Because we need the 60-m seeing, we need to work with turbulence profiles • DIMM data are only used for (successful) consistency checks • Best available data set: SCIDAR data covering >5 years, almost 200,000 data points • SCIDAR profiles are actually more accurate than MASS profiles for AO performance analyses (because of the higher vertical resolution), but: • Need to compare to MASS data from other sites ➞ reduce SCIDAR data to MASS resolution (a sketch follows after this slide) • Also, AO analysis tools are set up to use MASS profiles • Comparisons with other site testing data sets and AO performance from observatories are all consistent • Using the same interpolation to 60-m seeing as for Maunakea 13N • This is done on a point-by-point basis, assembling statistics afterward • But using statistics gives almost identical results (yes, we verified all of that) • All distributions are very close to log-normal once sufficient data are available • N.B.: the accuracy of (high-quality) turbulence measurements is of order 10%
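
Reducing a SCIDAR profile to MASS resolution amounts to summing the high-resolution layer integrals into the six MASS altitude bins. A minimal sketch; the bin edges below are assumptions bracketing the nominal MASS layers (0.5, 1, 2, 4, 8, 16 km) and may differ from those used in the actual analysis.

```python
import numpy as np

# Assumed bin edges around the six nominal MASS layers [m].
MASS_EDGES_M = np.array([0.25, 0.75, 1.5, 3.0, 6.0, 12.0, 24.0]) * 1e3

def scidar_to_mass(h, cn2):
    """Collapse a high-resolution Cn^2 profile into MASS-like layer integrals.

    h   : SCIDAR layer heights [m]
    cn2 : turbulence integral per SCIDAR layer [m^(1/3)]
    """
    which = np.digitize(h, MASS_EDGES_M)        # MASS bin index of each layer
    return np.array([cn2[which == i].sum()      # sum the integrals per bin
                     for i in range(1, len(MASS_EDGES_M))])
```

Layers below the lowest edge fall into bin 0 and are dropped, mirroring the fact that MASS excludes the ground layer.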

  15. Question: Seeing distributions. Are 7-m and 60-m seeing correlated at all times? • All distributions of turbulence parameters at ORM as derived from SCIDAR data are log-normal, as they should be (a quick check is sketched after this slide) (Panels: ORM 60-m seeing, ORM isoplanatic angle, ORM 7-m seeing)
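
A log-normal distribution is fully specified by the median and the scatter of the logarithm, which makes the check easy to script. A minimal sketch with synthetic data:

```python
import numpy as np

def lognormal_params(x):
    """Geometric median and log-scatter of a positive-valued sample.

    If x is log-normal, log(x) is Gaussian, so these two numbers
    describe the whole distribution.
    """
    logx = np.log(x)
    return np.exp(np.median(logx)), logx.std()

# Self-test on synthetic "seeing" drawn from a known log-normal:
rng = np.random.default_rng(0)
seeing = np.exp(rng.normal(np.log(0.7), 0.25, size=100_000))
print(lognormal_params(seeing))  # ~ (0.7, 0.25)
```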

  16. Question: Seeing distributions. Are 7-m and 60-m seeing correlated at all times? • ORM 7-m vs. 60-m seeing (figure) • 7-m and 60-m seeing are correlated, of course, because a large part of the atmosphere is the same, but they are not proportional to each other at all times (quantified in the sketch after this slide) • The reason is that ground layer and free atmosphere turbulence are only partly correlated
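
One way to quantify “correlated but not proportional” is the correlation of the log-seeing values together with the scatter of their ratio; a sketch with illustrative variable names:

```python
import numpy as np

def correlated_not_proportional(eps7, eps60):
    """Correlation of log-seeing plus the scatter of the 60-m/7-m ratio.

    High correlation with nonzero ratio scatter is exactly the behavior
    described above: the two heights share most of the atmosphere, but
    the ground layer adds a partly independent component.
    """
    rho = np.corrcoef(np.log(eps7), np.log(eps60))[0, 1]
    ratio_scatter = np.std(np.log(eps60) - np.log(eps7))
    return rho, ratio_scatter
```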

  17. ORM Monthly Seeing Variations • Monthly medians of 60-m seeing at ORM • Seeing and isoplanatic angle are best in summer, worst in winter

  18. ORM Monthly Seeing Variations • Monthly medians of isoplanatic angle at ORM • Seeing and isoplanatic angle are best in summer, worst in winter

  19. Question: Isoplanatic Angle and Coherence Time • Isoplanatic angle: SCIDAR provides a reliable estimate • The GL does not matter at all • We use MASS-resolution profiles from the SCIDARs for comparison with other sites • There is no question that the coherence time is long at ORM • This has been shown over and over again • 200 mbar wind speed (see next slide) • Weak high-elevation turbulence • Consistent with existing measurements • No time series of τ0 measurements simultaneous with the SCIDAR profiles is available • Using an estimate of the average τ0 for all profiles for AO performance simulations (the standard τ0 definition is sketched after this slide) • Some uncertainty on the exact value, but: • Undoubtedly longer than at the Chilean sites and probably a bit shorter than at Maunakea • Sensitivity and “inverse” analyses show that this has a small effect on NFIRAOS performance • 6 ms is likely a conservative estimate compared to other sites
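
For reference, the standard definition of the coherence time from per-layer turbulence integrals and wind speeds; this is a sketch of the textbook formula, not the ORM analysis itself (which had no simultaneous wind profiles):

```python
import numpy as np

def coherence_time(cn2, v, lam=0.5e-6):
    """tau0 = 0.314 r0 / v_eff from per-layer Cn^2 integrals and winds.

    cn2 : turbulence integral per layer [m^(1/3)]
    v   : wind speed per layer [m/s]
    """
    J = cn2.sum()
    r0 = (0.423 * (2 * np.pi / lam) ** 2 * J) ** (-3.0 / 5.0)    # Fried parameter
    v_eff = (np.sum(cn2 * v ** (5.0 / 3.0)) / J) ** (3.0 / 5.0)  # Cn^2-weighted wind
    return 0.314 * r0 / v_eff                                    # seconds
```

With the same Cn² integral, halving the effective wind speed doubles τ0, which is why weak 200 mbar winds translate directly into a long coherence time.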

  20. 200 mbar Wind Speed (figure: published 200 mbar wind speeds and our own analysis of radiosonde data) Two main conclusions: • The data published by IAC are in agreement with our analysis • This is also true for the sites not shown here • There is no significant trend in the long-term statistics

  21. Precipitable Water Vapor ORM PWV derived from radiosonde measurements. Published GPS measurements are believed to be a slight underestimate. Maunakea values are from 225 GHz radiometers.

  22. Question: Precipitable Water Vapor (PWV) Monthly distribution of PWV from June 2001 to December 2008 (from Garcia-Lorenzo 2010). Note that the absolute calibration of these data is slightly (~10%) different from what we are using for the overall statistics, but the month-to-month variations are the same. PWV values are best in winter and early spring, worst in summer and early fall.

  23. Cloud Cover and Usable Time (from 2008 analysis) • Very intensive analysis done in 2008 for the original candidate sites • Using All-Sky Camera (ASCA) images, MASS transparency, satellite and weather station data, and cross-comparing all of these • Satellite data anchored / calibrated using ASCA images for simultaneous data • Can then use the longer time series from the satellite data analysis • Absolute values depend on the data used • E.g. definitions of clear/usable conditions or observatory shutdown conditions • Inherent errors of each method, operational problems • Length of the available data set (are results representative of long-term conditions?) • … • Results from different methods, while quantitatively different, are usually well correlated, with scatter on the 3-5% level • Exception: we found DIMM operational statistics to be unreliable • Take-away: results may not literally be “clear” or “usable” time by the English-language meanings of these words, but for comparing sites, different methods agree on the ~5% level or better • Use of satellite data means that we have equivalent data for all sites • So that’s what we did for the original candidate sites

  24. Bottom Line Results (from 2008 analysis) “Clear time” from the Erasmus satellite studies: • Satellite data cover longer periods than on-site measurements (all-sky camera (ASCA), MASS transparency, weather station) at the sites • Satellite data were extensively validated using these (and other) ground observations • Additional time lost comes from simultaneous ASCA and weather station measurements • Precision is on the few percent level

  25. ORM Information: Cloud Cover (from the IAC Sky Quality Team) • We do not have equivalent data for ORM (remember, we need simultaneous weather and cloud time series, not just statistics) • Usable time is estimated from observatory weather loss statistics Note: the TMT “performance conditions” requirement is 18 m/s ≈ 65 km/h, similar to the LT.

  26. ORM Information: Usable Time (from the IAC Sky Quality Team) Monthly variation of usable time at ORM

  27. ORM Information: Usable Time (from the IAC Sky Quality Team) • Advantage of the observatory weather loss statistics: • They automatically combine clouds and all other shutdown conditions (wind, high humidity and strong dust events) • The statistics presented are derived from observing logs • Technical downtime is accounted for • Disadvantages: • Shutdown conditions differ from observatory to observatory (incl. TMT) • The ATC (a small, 0.18-m, automated telescope) observes in close to all clear conditions (shown by its 79% uptime, consistent with the satellite results of 77-82%) • Wind speed limits for the LT and TNG are lower than for TMT, higher at the WHT and NOT • Not quite the same type of statistics as what we used for our other candidate sites, but it is the closest we have, and they are consistent with everything else we have looked at (in fact, the result might be pessimistic compared to the satellite data used for the other sites) • Usable time for TMT at ORM will likely be similar to that of the large telescopes there, i.e., in the 70-74% range (using 72% in our site merit function) • Corroborated by our own analysis of CMT observing logs: agrees to <2% • “Manual” analysis of a continuous set of >5 years of observing logs • This includes accounting for weather loss during technical downtime • Corroborated by a recent analysis done by CTA

  28. Usable Time Metric Summary

  31. Question: Are there indications of climate change affecting the sites? Last year, we contracted a climate study with Eddie Graham • Meteorologist and climatologist who has worked extensively on the effect of the climate on astronomical sites (e.g. with ESO) • Investigation of changes of the Hadley cells and of PWV • The Hadley cells are shifting and changing size; however, the change expected over the next 50 years (if the current trend continues) is small compared to the extent of the cells • The strengths of the Hadley cells show no shift • If anything, the cell over the Canaries seems to be more stable than those at the other sites • There are no significant long-term trends in PWV at any of the candidate sites • This is consistent with Andre Erasmus’ results for the original candidate sites (Two direct screen grabs from the conclusions of the final report are shown.)

  32. Question: Effect of CTA on wind and turbulence at the site Addressed by two sets of computational fluid dynamics (CFD) simulations: • CFD study of the effect of CTA on the existing observatories • Addresses this question directly • Found negligible effect, if any, on any of the existing observatories • The TMT site is quite a bit farther from the CTA site than most current observatories are • CFD study of the effect of TMT on GTC • Found negligible effect of TMT on GTC • TMT is much closer to GTC than to CTA • The TMT enclosure is larger than any of the CTA dishes • This same study was also used to identify the best of the available sites for TMT

  33. Ground-level Dust and Extinction • Sources of dust measurements • MK 13N, Tolonchar, SPM, Armazones (~2.5 years at each site) • TMT site testing: commercial dust sensor at 7 m, measurements every 5 to 7 minutes • ORM (9 yr 5 mo of measurements every 2 hours) • Commercial dust sensor (different model but same specs as above) • External inlet at 11 m on the TNG enclosure • Extinction • Curves are models by Chuck Steidel, consistent with his own calibration measurements at ORM and MK • Black dots are median extinction levels from 10 years of nightly photometric calibration measurements • X is the 0.132 mag/airmass from 30 years of CMT measurements, excluding the period of high extinction due to Mount Pinatubo (converted to transmission in the sketch after this slide) • Triangle is the V-band extinction of 0.12 mag/airmass from the Gemini North website
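
To connect the quoted extinction coefficients to throughput: an extinction of k mag/airmass corresponds to a transmission of 10^(−0.4 k X) at airmass X. A one-line check for the CMT value above:

```python
def transmission(k_mag_per_airmass, airmass=1.0):
    """Atmospheric transmission implied by an extinction coefficient."""
    return 10 ** (-0.4 * k_mag_per_airmass * airmass)

print(transmission(0.132))  # ~0.885 -> about 11.5% of V-band light lost at zenith
```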

  34. Dust mass concentration probability distribution (Dust mass calculated from particle size distribution)

  35. Extinction

  36. C. Buton et al., 2013

  37. Summary of Site Parameters AO merit function: Strehl² • with Strehl = exp(−σ²) • σ: wavefront error (WFE) • Turbulence contributions: fitting, bandwidth and isoplanatism errors WFE for all candidate sites from full end-to-end simulation of NFIRAOS using measured profiles More information in the backup slides

  38. Summary • ORM site results used in the TMT alternate site studies were obtained through • Our own analyses (same team that did the original TMT site testing) • Validation with as many other data sets as possible • Sensitivity and inverse studies • Simulations (AO, computational fluid dynamics, …) • Interactions with observatory operators, users, … • An interactive process with the TMT SAC in several day-long meetings • SAC “signed off” on our analyses • Discussions usually focused on the effect of the site characteristics • As a side note: ORM’s conditions are quite similar to LCO’s, but ORM is superior for some AO science cases (due to better free-atmosphere seeing, isoplanatic angle and coherence time) and has a better thermal regime (cooler site).

  39. Backup Slides

  40. Adaptive Optics Turbulence Metrics Form of metric: Strehl² • with Strehl = exp(−σ²) • σ: wavefront error (WFE) in radians • In principle, this needs to include implementation and NGS-controlled low-order modes • However, normalizing to the best site is mathematically equivalent to only using the incremental WFE with respect to that site • Best site: Maunakea 13N for AO performance, because of its low free-atmosphere turbulence strength and large isoplanatic angle • σ² ∝ λ⁻² • Need to evaluate this at a variety of wavelengths • Using J (1.22 µm), H (1.63 µm), K (2.19 µm)

  41. Adaptive Optics Turbulence Metrics Wavefront error is calculated by two methods (but only Method 2 is used in the final results): 1. From measured turbulence parameters: σ² = σfitting² + σbandwidth² + σisopl² • Fitting error: σfitting² ∝ r0^(−5/3) • Bandwidth error: σbandwidth² ∝ τ0^(−5/3) • Isoplanatism error: σisopl² ∝ θ2^(−5/3) • Note that this is θ2, not θ0: taking the 2-DM correction of NFIRAOS into account • On-axis results by setting σisopl² = 0 (a sketch of Method 1 follows after this slide) 2. Full NFIRAOS simulations: • Use σ² from above only to define representative profiles • Run these profiles through the AO group’s MAOS simulations
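
A minimal sketch of Method 1 with the proportionality constants left as placeholders; in reality they are system-dependent prefactors, and the actual comparison uses the full MAOS simulations (Method 2). The demo constants are arbitrary, chosen only to show the λ⁻² scaling.

```python
import numpy as np

def strehl_sq(r0, tau0, theta2, lam, lam_ref=0.5e-6,
              c_fit=1.0, c_bw=1.0, c_iso=1.0):
    """Strehl^2 merit from the three turbulence error terms.

    r0 [m], tau0 [s], theta2 [rad] are given at lam_ref; the c_*
    constants are placeholders for the system-dependent prefactors.
    """
    sigma2_ref = (c_fit * r0 ** (-5.0 / 3.0)         # fitting error
                  + c_bw * tau0 ** (-5.0 / 3.0)      # bandwidth error
                  + c_iso * theta2 ** (-5.0 / 3.0))  # isoplanatism (0 on-axis)
    sigma2 = sigma2_ref * (lam_ref / lam) ** 2       # sigma^2 ~ lambda^-2
    return np.exp(-sigma2) ** 2

# Arbitrary demo values; the merit improves toward longer wavelengths:
for lam in (1.22e-6, 1.63e-6, 2.19e-6):  # J, H, K
    print(lam, strehl_sq(0.15, 5e-3, 1e-5, lam,
                         c_fit=1e-2, c_bw=1e-5, c_iso=1e-9))
```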

  42. Ground level dust concentrations

  43. Mirror degradation • Gemini testing at Pachon • Bare Al lost 0.03%/day, protected silver 0.06%/day, without any cleaning • Both restored to 100% after wet cleaning; no surface degradation • Liverpool Telescope (bare Al) experience at ORM • 0.1%/day in between CO2 cleanings, 0.04%/day on average with CO2 cleaning on a 6-week timescale • CTA testing (overcoated Al at SPM, Armazones, Teide) • Loss rates in %/day: 0.015 (SPM), ~0.02 (Armazones), ~0.01 (Teide) • The impact on operations of ground-level dust at ORM is much less of a concern than anecdotal reports would lead one to believe (TMT site testing report conclusion, W. Skidmore et al., 2016)

  44. Source of Las Campanas (LCO) Information in the SMF Spreadsheet • Usable time: 75% • Clear fraction: 78%, taken from the same Erasmus report from which we take the clear fraction for the other original TMT candidate sites • Additional time lost: use 3%, same as for Armazones • Turbulence parameters: using Armazones values; consistent with measurements from the GMT site selection campaign • NIR sensitivity: estimate based on the Tolar (0.68) and Tololo (0.65) values in the Cohen report; these two sites have a similar average temperature, but are a little lower • PWV: estimated from the GMT site testing paper, which quotes the 25th-percentile value as 2.1 mm • Mean night-time temperature: from the GMT site selection report

  45. Some More Thoughts on Turbulence Profiles Not meant as an exhaustive list, only as discussion input • Use of 60-m profiles • There are larger differences between sites in the ground layer than in the free atmosphere • Differences at 60 m would be smaller than at 7 m, even if the profile shapes were the same • We are not saying that there is no turbulence contribution from below 60 m, but that the strength of that turbulence is dominated by the enclosure and telescope, not the site • The contribution from below 60 m is therefore close to the same for all sites • NFIRAOS simulations take this into account • If we are underestimating the dome/mirror seeing contribution, relative differences between sites decrease • MCAO/ExAO systems have an easier time correcting the ground layer • One deformable mirror is conjugate to the ground layer, while the elevation of free-atmosphere turbulence changes • Ground layer turbulence is slower than free-atmosphere turbulence

  46. Discussions with ORM Site Testers and Users In addition to detailed discussions with the IAC site testing team, we had many additional discussions with ORM site testers and users. For example: • Trip to ESO in Garching, Germany for a full-day discussion with Marc Sarazin (ESO’s site testing lead) • Discussions with Jacques Sebag (LSST site testing lead) in Tucson • Information from and discussions with the CTA project staff • In-person discussions with the Durham AO and site testing group at the SPIE meeting in Edinburgh and by email afterward • Meetings with the staffs of pretty much all observatories at ORM • Input from astronomers within TMT and further afield who have observed at ORM

  47. Acknowledgments The TMT Project gratefully acknowledges the support of the TMT collaborating institutions. They are the Association of Canadian Universities for Research in Astronomy (ACURA), the California Institute of Technology, the University of California, the National Astronomical Observatory of Japan, the National Astronomical Observatories of China and their consortium partners, and the Department of Science and Technology of India and their supported institutes. This work was supported as well by the Gordon and Betty Moore Foundation, the Canada Foundation for Innovation, the Ontario Ministry of Research and Innovation, the National Research Council of Canada, the Natural Sciences and Engineering Research Council of Canada, the British Columbia Knowledge Development Fund, the Association of Universities for Research in Astronomy (AURA) and the U.S. National Science Foundation.
