Abstract
[en] CMS physicists need seamless access to their experimental data and results, independent of location and storage medium, so that they can focus on exploring new physics signals rather than on the complexities of worldwide data management. To achieve this goal, CMS has adopted a tiered worldwide computing model that will incorporate emerging Grid technology. CMS has started to use Grid tools for data processing, replication and migration. Important Grid components are expected to be delivered by the Data Grid projects, such as the EU DataGrid, PPDG and GriPhyN. As part of the activity of interfacing with these projects, CMS has created a set of long-term requirements for the Grid projects. These requirements are presented and discussed.
Primary Subject
Secondary Subject
Source
Chen, H.S. (ed.) (Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics); 757 p; 2001; p. 754-757; CHEP 2001: international conference on computing in high energy and nuclear physics; Beijing (China); 3-7 Sep 2001; Available from China Nuclear Information Centre
Record Type
Miscellaneous
Literature Type
Conference
Country of publication
Reference Number
Related Record
INIS Volume
INIS Issue
Abstract
[en] The multi-tiered architecture of the highly distributed CMS computing systems necessitates a flexible data distribution and analysis environment. The authors describe a prototype analysis environment that functions efficiently over wide area networks, using a server installed at the Caltech/UCSD Tier 2 prototype to analyze CMS data stored at various locations from a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server, accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a 'black box' on the proto-Tier2 system. ORCA Objectivity databases (e.g. an existing large CMS muon sample) are hosted on the master and slave nodes; remote clients can request that queries be processed across the server nodes and have the histogram results returned and rendered in the client. The server is implemented in pure C++ and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability and better integration with CARF-ORCA, and, importantly, it makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as Hippodraw, Lizard, or ROOT.
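The language-neutral XML-RPC transport described in this abstract can be illustrated with a minimal sketch. The method name ("histogram"), the binning scheme, and the loopback setup below are assumptions for illustration only, not the actual CMS server interface; the point is that any XML-RPC-capable client (Java, C++, Python, ...) can request a query and receive histogram results.

```python
# Illustrative sketch of a language-neutral XML-RPC analysis service.
# The "histogram" method and its binning scheme are hypothetical, not
# the real CMS/ORCA server API.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def histogram(values, nbins, lo, hi):
    """Bin a list of values into nbins equal-width bins over [lo, hi)."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

# Server side: expose the query method over XML-RPC on an ephemeral port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(histogram, "histogram")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: a thin client calls the method and gets bin counts back.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
counts = proxy.histogram([0.1, 0.4, 0.6, 0.9, 1.5], 2, 0.0, 1.0)
print(counts)  # two bins over [0, 1); the value 1.5 falls outside
```

Because XML-RPC serializes the call to plain XML over HTTP, a client written in any language can issue the same request, which is the transport property the abstract highlights.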
Primary Subject
Secondary Subject
Source
Chen, H.S. (ed.) (Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics); 757 p; 2001; p. 186-189; CHEP 2001: international conference on computing in high energy and nuclear physics; Beijing (China); 3-7 Sep 2001; Available from China Nuclear Information Centre
Record Type
Miscellaneous
Literature Type
Conference
Country of publication
Reference Number
Related Record
INIS Volume
INIS Issue
Abstract
[en] The Terabyte Analysis Machine Project is developing hardware and software to analyze Terabyte-scale datasets. The Distance Machine framework provides facilities to flexibly interface application-specific indexing and partitioning algorithms to large scientific databases.
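The idea of plugging application-specific indexing and partitioning algorithms into a large data store, as this abstract describes, can be sketched in miniature. The class and method names below are illustrative only and bear no relation to the actual Distance Machine API.

```python
# Minimal sketch of a pluggable partitioning/indexing interface.
# PartitionedStore and its methods are hypothetical names for
# illustration, not the Distance Machine framework itself.
from typing import Any, Callable, Iterable, List

class PartitionedStore:
    """Route records to partitions via an application-supplied function."""

    def __init__(self, npartitions: int, partition_fn: Callable[[Any], int]):
        self.partitions: List[list] = [[] for _ in range(npartitions)]
        self.partition_fn = partition_fn

    def insert(self, record: Any) -> None:
        # The application decides which partition each record lands in.
        self.partitions[self.partition_fn(record) % len(self.partitions)].append(record)

    def query(self, predicate: Callable[[Any], bool],
              candidate_parts: Iterable[int]) -> list:
        # An application-specific index would narrow candidate_parts so
        # that only a few partitions are scanned, not the whole dataset.
        return [r for p in candidate_parts
                for r in self.partitions[p] if predicate(r)]

# Example: partition sky objects into 30-degree declination bands.
store = PartitionedStore(4, partition_fn=lambda rec: int(rec["dec"] // 30))
for dec in (5, 35, 65, 95):
    store.insert({"dec": dec})

# Scan only bands 1 and 2 for objects with dec > 30.
hits = store.query(lambda r: r["dec"] > 30, candidate_parts=[1, 2])
print(len(hits))
```

The design point is that the store itself stays generic; the science application supplies the partitioning and index-pruning logic, which is the "flexible interface" the abstract refers to.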
Primary Subject
Secondary Subject
Source
Chen, H.S. (ed.) (Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics); 757 p; 2001; p. 93-94; CHEP 2001: international conference on computing in high energy and nuclear physics; Beijing (China); 3-7 Sep 2001; Available from China Nuclear Information Centre
Record Type
Miscellaneous
Literature Type
Conference
Country of publication
Reference Number
Related Record
INIS Volume
INIS Issue
Bunn, J.J.; Hickey, T.M.; Holtman, K.; Legrand, I.; Litvin, V.; Newman, H.B.; Samar, A.; Singh, S.; Steenberg, C.; Wilkinson, R.; Stockinger, K.; Amundson, J.; Bauerdick, L.A.T.; Gaines, I.; Graham, G.; Muzaffar, V.S.; O'Dell, V.; Wenzel, H.; Stickland, D.; Wildish, T.; Branson, J.; Clare, R.; Avery, P.
Proceedings of CHEP 2001
Abstract
[en] The CMS groups in the USA are actively involved in several Grid-related projects, including the DOE-funded Particle Physics Data Grid (PPDG) and the NSF-funded Grid Physics Network (GriPhyN). The authors present developments of: the Grid Data Management Pilot (GDMP) software; a Java Analysis Studio-based prototype remote analysis service for CMS data; tools for automating job submission schemes for large-scale distributed simulation and reconstruction runs for CMS; modeling and development of job scheduling schemes using the MONARC toolkit; and a robust execution service for distributed processors. The deployment and use of these tools at prototype Tier 1 and Tier 2 computing centers in the USA is described.
Primary Subject
Secondary Subject
Source
Chen, H.S. (ed.) (Chinese Academy of Sciences, Beijing (China). Inst. of High Energy Physics); 757 p; 2001; p. 750-753; CHEP 2001: international conference on computing in high energy and nuclear physics; Beijing (China); 3-7 Sep 2001; Available from China Nuclear Information Centre
Record Type
Miscellaneous
Literature Type
Conference
Country of publication
Reference Number
Related Record
INIS Volume
INIS Issue