MUSE Test Suite for
Full Service End-to-End Analysis of Access Solutions

[MUSE]

The demand for delivering high-bandwidth multimedia applications to end users, via multiple service providers, network operators and technologies, has driven the worldwide development of a new generation of access networks.


Key Features


  • A wide range of “knowledge tools” for analyzing the strong and weak points of an access network.

  • A single source of reference to relevant standards, methods, techniques and terminology.

  • Reflects the state of the art.

  • Wide scope: covers the full OSI stack and much more.

  • Offers both in-depth and explanatory information.


State-of-the-art access networks have to meet so many requirements that they have become very complex. To verify whether these networks are adequate for their purpose, you cannot limit yourself to a set of isolated tests on individual devices. Instead, a full-service end-to-end approach should be followed to test the system as a whole. So how can the most relevant capabilities and shortcomings of such a system be analysed?

Answering this question becomes relevant, for instance, when someone needs to make migration decisions toward new network solutions. In such a case they want to benchmark one solution against many alternatives. Usually, this starts with defining functional requirements, followed by the creation of RFI/RFQ documents.
But do those requirements address the problem as a whole and in sufficient detail? How can the most relevant selection criteria be found, which additional tests are needed, how should the test results be interpreted, and how can all of that be transformed into a well-balanced evaluation of the offered solution? In those cases, you could definitely use some help.

The answers are not obvious. If a system delivers services to end users over channels that are sometimes overloaded with other traffic, or over unreliable channels, it may be adequate for downloading video files yet inadequate for streaming video. This emphasises the need for a full-service end-to-end approach to evaluating a system.

The solution

Until the public release of the MUSE Test Suite, no single source of reference existed for analysing a system as a whole from so many technological viewpoints. The guidance in this Test Suite offers you a holistic top-down view on which characteristics of a system are really relevant, and how you can identify them.

Some of the details can also be found in standards. This may be true for testing physical-layer aspects of a system or for testing individual elements or subsystems. However, when higher-layer characteristics are to be analysed, the required information becomes scarce or is simply not available. And with so many standards out there, where should one start?

The MUSE Test Suite was designed specifically to address such challenges: what to test and how to test. It is a comprehensive document that provides you with a holistic system view on testing and evaluating the capabilities of access systems. It describes a wide range of test objectives in significant detail, and guides the reader through state-of-the-art views and many standards (where appropriate).



Contents


A test suite in two parts:

Part 1 covers test objectives and identifies what to test (over 300 pages).
Part 2 covers test methods and identifies how to test (over 375 pages).
The first part provides extensive guidance on the network characteristics that should be analysed, and summarizes what information is already available in various standards. It outlines several classification and description models of access networks from different perspectives (the well-known OSI layer model covers only a limited set of functionalities). It also summarizes a variety of service requirements and introduces the various chapters on testing.
The second part discusses detailed test procedures and methods to analyze the characteristics identified in part 1.

Quality of service

A reference for the assessment of the QoS of various services, as perceived by users. The first part describes which aspects should be tested when assessing the performance of VoIP, Streaming Video, Videoconference, Interactive Gaming and Web browsing. The second part discusses, in detail, tests for obtaining the perceived QoS for these services.

Service connectivity

A top-down view on connectivity, starting from a service point of view. Multiple classes of services are described, and for each class an example service (e.g. high-speed internet, multicast streaming, VoIP) is used to show how service testing can be done from the service layer down to the network layer. Of course, this chapter does not cover all services exhaustively.
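Testing from the service layer down can start with something as simple as an application-layer reachability and timing probe. The sketch below is illustrative only (it is not taken from the Test Suite): it measures HTTP response time against a throwaway local server that stands in for a high-speed internet service.

```python
import threading
import time
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler


def probe(url: str, timeout: float = 5.0):
    """Fetch a URL and return (HTTP status, response time in seconds)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # pull the full body so timing covers the download
        return resp.status, time.monotonic() - start


# Demo: a throwaway local HTTP server on an ephemeral port stands in
# for the real service under test.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, elapsed = probe(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

A real service-connectivity test would run such probes against the actual access network under varying load, and record the timing distribution rather than a single sample.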

Network connectivity

A bottom-up view on connectivity, starting from the architecture. It describes relevant tests and standards for the network layer and upward. It concentrates on functionality and performance tests without detailing specific test procedures; it excludes the physical layer.
As it focuses specifically on the MUSE architecture, it is much more specific than the chapters on service connectivity.

Connectivity testing of RGWs

A reference for network- and higher-layer tests on residential gateways. It identifies detailed objectives for testing IGMP, IGMP proxy, DHCP and PPPoE functionality and conformance. These objectives are based on requirements gathered from the DSL Forum and the IETF.
The second part describes the test procedures in more detail, as they were used and verified while implementing an automated test suite for residential gateways.

Management testing

A detailed mapping of the TMN management functions described in ITU-T M.3400 onto an exhaustive list of requirements. Readers are made familiar with the TMN model and its management functions, so that they can understand why MUSE has chosen TMN as its reference model for testing management systems.

xDSL specific testing

A comprehensive overview of the xDSL subjects that are really relevant, plus the motivation why. It covers all flavours of xDSL and gives up-to-date guidance through the huge number of available tests in standards (ITU, DSLF, ETSI).
Part two refers to existing standard tests and concentrates on describing missing details. Furthermore, it adds methods for recent topics, such as means for loop qualification (DELT/SELT), mitigating impulsive noise, and protecting legacy systems (PSD shaping).

Fibre specific testing

A top-down view on testing the generic system aspects that are characteristic of fibre-based access networks. It concentrates on optical signals and on the performance of an access system as a whole (leaving tests for underlying optical components out of scope). Systems are differentiated with respect to their interfaces (analogue, digital, framed digital) and topologies (point-to-point, point-to-multipoint). Test methodologies are identified for each of them. Specifically, it covers tools and procedures available to measure relevant physical-layer parameters, referencing appropriate standards.
[MUSE]

MUSE is a European consortium of vendors, operators and universities, active from January 2004 to March 2008. Its aim is cooperation on research and development of future, low-cost, multi-service access networks.

MUSE is partly funded by the FP6 programme of the European Commission, and this Test Suite is one of its deliverables (DTF4.4).

More information on MUSE and on obtaining this Test Suite can be found on the MUSE website:

www.ist-muse.org


December 2007



Testing Quality of Service


An important aspect of the Test Suite is the assessment of the quality of service, for various applications, as perceived by users.

First, we have selected a set of representative applications:
  • Voice-over-IP
  • Streaming Video
  • Videoconference
  • Web browsing and downloading
  • Interactive Gaming



For each application we have listed the factors that influence the perceived QoS. Following ITU-T P.800, the perceived QoS is rated on a five-point scale, the so-called Mean Opinion Score (MOS); see the table below.

MOS  QUALITY    IMPAIRMENT
5    Excellent  Imperceptible
4    Good       Perceptible, but not annoying
3    Fair       Slightly annoying
2    Poor       Annoying
1    Bad        Very annoying
Definition of the MOS scale, based on opinion scores
MOS values can be obtained by objective methods, by subjective methods, or by mapping network quality to perceived QoS.

The MOS value for VoIP is obtained by measuring the listening, talking, and interaction quality.
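One widely used objective method maps measured network impairments to a MOS via the E-model of ITU-T G.107: impairments are subtracted from a transmission rating R, which is then converted to a MOS. The R-to-MOS conversion below is the standard G.107 formula; the R computation, however, is a deliberately simplified sketch (the delay term follows a well-known approximation, and the loss coefficient is an illustrative assumption, since the real impairment factor depends on the codec):

```python
def r_to_mos(r: float) -> float:
    """Convert an E-model transmission rating R to a MOS (ITU-T G.107)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6


def simplified_r(delay_ms: float, packet_loss_pct: float) -> float:
    """Toy R computation: subtract illustrative delay and loss impairments
    from the default rating of 93.2. The loss coefficient (2.5 per percent)
    is an assumption for this sketch, not a G.107 value."""
    i_delay = 0.024 * delay_ms + max(0.0, 0.11 * (delay_ms - 177.3))
    i_loss = 2.5 * packet_loss_pct
    return 93.2 - i_delay - i_loss


# 50 ms one-way delay and 1% packet loss -> still "good" quality.
mos = r_to_mos(simplified_r(delay_ms=50.0, packet_loss_pct=1.0))
```

A full E-model implementation adds many more terms (echo, codec distortion, quantization noise); the point here is only the overall shape of a network-to-MOS mapping.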



The perceived QoS for video services is measured through the visual, audio and synchronization quality, in addition to the quality of zapping (Streaming Video) and interaction (Videoconference).



In order to assess the user-experienced quality for web browsing, we measure response and download times and map these to MOS values.

Finally, we show how MOS values can be obtained for Interactive Gaming, from network measurements of Ping and Jitter.
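Mappings of this kind often assume that perceived quality falls roughly logarithmically with waiting time, in the spirit of ITU-T G.1030. The sketch below shows the shape of such a response-time-to-MOS mapping; the anchor points (0.5 s for "excellent", 30 s for "bad") are illustrative assumptions, not standardized values:

```python
import math


def response_time_to_mos(t_seconds: float,
                         t_best: float = 0.5,
                         t_worst: float = 30.0) -> float:
    """Map a measured web-page response time onto the 1..5 MOS scale.

    Assumes a logarithmic relation between waiting time and perceived
    quality; t_best and t_worst are illustrative anchor points marking
    where the score saturates at 5 and 1 respectively.
    """
    if t_seconds <= t_best:
        return 5.0
    if t_seconds >= t_worst:
        return 1.0
    # Linear in log(t) between the two anchors.
    frac = (math.log(t_seconds) - math.log(t_best)) / \
           (math.log(t_worst) - math.log(t_best))
    return 5.0 - 4.0 * frac
```

For gaming, the same pattern applies with ping and jitter as inputs instead of response time, with coefficients calibrated per game genre.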



Testing Management systems


A systematic way of testing the management capabilities of a system as a whole is to follow the structure of the TMN model defined by the ITU. This model provides an overall architectural framework for analysing management functions (vertical view) as well as TMN layers (horizontal view).



The horizontal view follows the traditional way of describing element management functions (see figure 1). This is the so-called FCAPS method, which distinguishes management functions for Faults, Configuration, Accounting, Performance and Security.

The vertical view covers various management levels, starting from the need of service providers to support their business processes, down to the management of individual network elements. The MUSE management model is a mixture of the so-called eTOM model (for the higher layers) and the M.3400 model (for the lower layers), and distinguishes between Business management, Service management, Network management, Element management and the individual network elements.

The Test Suite provides the reader with an exhaustive framework for management testing that is well prepared for future developments and not restricted to MUSE-only architectures.

Figure 1. TMN and eTOM models


Part 1 of the Test Suite starts with the motivations behind the selection of the TMN model as a reference. It provides an introductory part focusing on the underlying FCAPS concepts, and prepares for a detailed step-by-step Management Testing approach. It summarizes a range of test objectives, covering all capabilities needed to manage a system as a whole.

Part 2 starts with a summary of the TMN management functions described in ITU-T M.3400. It maps all their requirements onto an exhaustive list of functionalities to be evaluated. The structure of the proposed approach ensures that no aspect is missed. In fact, part 2 identifies all test steps needed to test the objectives summarized in part 1.




Both parts are complementary in nature and share common goals: to validate management capabilities according to the TMN model, and to provide an adequate framework for doing so. The approach is generic (not restricted to MUSE-only architectures) and enables a test setup that facilitates rapid testing of management functions for all practical cases.



Testing xDSL-based systems


The prime objective of the xDSL chapter in part 1 is to give a comprehensive overview on testing xDSL, written in a tutorial style. The reader is guided through all the topics that are really relevant, leaving unnecessary details and complexity to the various xDSL standards. The document is supplementary to the standards: adding new tests, giving additional explanation, and providing references to further reading where meaningful.


Figure 1: xDSL brings broadband connections to millions of European households.

Part 1 starts with the different approaches to analyzing xDSL systems, followed by a step-by-step guideline along the relevant test objectives in terms of functional tests. Functional tests determine whether a specific xDSL-based solution can serve as an adequate transport platform for a full-service, end-to-end access network. Possible applications of functional tests are (a) debugging solutions that are under development, (b) identifying strong and weak points of a solution to enable strategic decisions on migration scenarios, or (c) selecting one solution out of many offers from different vendors.

The description of the xDSL test objectives in part 1 has been grouped as follows:

Figure 2: Basic test set-up for testing under differential mode injected impulsive noise.

Part 2 is very different from part 1. It is written in a procedural style, guiding the test engineer step by step through the associated xDSL test methods. Its style is similar to that of various DSL Forum test documents, with the difference that part 2 concentrates on functional tests; the (comprehensive) DSL Forum documents mainly concentrate on conformance tests.
All objectives identified in part 1 have been elaborated in part 2.


Figure 3: Photo of an enlarged Dutch street cabinet, demonstrating how xDSL systems are moving closer to the customer location. The enlargement was required to accommodate VDSL2 equipment.

Some of the new topics that are discussed in both parts are:
  • The need for PSD shaping to enable VDSL2 to coexist with ADSL in the same cable, and how the compliance of shaping with access rules can be verified.
  • Why the demand for transporting video services has made testing the immunity against impulse and RFI noise so important.
  • Developments on new capabilities to manage the advanced features introduced since ADSL2+ and/or VDSL2. Examples are PSD shaping, impulse noise protection and loop diagnostics.
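Verifying PSD-shaping compliance ultimately reduces to comparing a measured spectrum against a piecewise-linear mask. The sketch below shows that core check; the mask breakpoints and measurement values are made up for the example and are not taken from any standard:

```python
def mask_level(freq_hz, mask):
    """Interpolate a piecewise-linear PSD mask at the given frequency.

    `mask` is a list of (frequency_Hz, level_dBm_per_Hz) breakpoints,
    sorted by frequency."""
    if freq_hz <= mask[0][0]:
        return mask[0][1]
    if freq_hz >= mask[-1][0]:
        return mask[-1][1]
    for (f0, p0), (f1, p1) in zip(mask, mask[1:]):
        if f0 <= freq_hz <= f1:
            return p0 + (p1 - p0) * (freq_hz - f0) / (f1 - f0)


def psd_violations(samples, mask, tolerance_db=0.0):
    """Return the measured (freq, psd) points that exceed the mask."""
    return [(f, p) for f, p in samples
            if p > mask_level(f, mask) + tolerance_db]


# Illustrative mask and measurement (all values invented for the sketch).
mask = [(138e3, -40.0), (1.1e6, -40.0), (2.2e6, -60.0)]
samples = [(200e3, -42.0), (1.5e6, -45.0), (2.0e6, -58.0)]
bad = psd_violations(samples, mask)  # only the 1.5 MHz sample exceeds it
```

A real compliance test would use the mask definitions from the applicable access rules and a calibrated spectrum analyser capture as input.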



Testing Fibre-based systems


The organisation of tests for analyzing fibre-based systems follows a strict top-down approach. Tests that are relevant for systems that transport analogue TV channels over fibre may not be suitable for digital systems with passive optical point-to-multipoint (P2MP) interconnections.

To cover all these systems in a single Test Suite, the description of test objectives has been grouped in part 1 as follows:
  • Functional tests, which can be applied to all fibre-based systems. They analyse the capabilities of a system to manage and monitor the link, to survive fibre breaks (through redundancy) and to prevent security-related issues.
  • Tests dedicated to systems with specific external (electrical) interfaces. Systems with analogue, digital and/or framed digital interfaces obviously have very different requirements.
  • Tests dedicated to systems with specific internal (optical) interfaces. Think of different modulation formats, wavelength schemes and network architectures (P2P, P2MP). These tests take into account that the optical signal quality and/or the total transmission performance can be degraded by different phenomena.
What all the above tests have in common is that they analyse fibre-based networks from a system point of view. A full characterisation of individual elements (such as devices, modules and subsystems) is considered out of scope.

Part 2 summarizes a variety of test methodologies to verify the capabilities identified in part 1. It provides the reader with many references to test methods that have been described in public documents, appropriate standards and application notes.

It elaborates on relevant topics including underlying test configurations, specific performance tests under stressed conditions (see figure 1) and signal quality concerns (e.g. burst mode transmission).
Figure 1: Impact of ORL (produced by different configurations of splitters in a P2MP architecture) on ONT sensitivity (BER = 10^-10); exemplary test result.

To enable a fibre-based network to perform above pre-defined performance requirements, it is essential to characterize several performance-limiting characteristics of the submodules within a link. This involves measurements of all kinds of key parameters of transmitters and receivers, as well as power budget calculations for the link. It also involves measurements of limiting characteristics of passive optical fibre plants, such as the spectral response of optical filters (see figure 2).
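A power budget calculation itself is simple arithmetic in dB: transmit power minus the sum of all losses must stay above the receiver sensitivity, and the surplus is the link margin. The sketch below illustrates this for a hypothetical P2MP downstream link; every numeric value (splitter loss, fibre attenuation, connector loss, transmitter power, receiver sensitivity) is an illustrative assumption, not taken from any standard:

```python
def link_margin_db(tx_power_dbm, rx_sensitivity_dbm, losses_db):
    """Optical link budget: received power is the transmit power minus
    the sum of all losses; the margin is how far that stays above the
    receiver sensitivity (positive margin = budget closes)."""
    received_dbm = tx_power_dbm - sum(losses_db)
    return received_dbm - rx_sensitivity_dbm


# Hypothetical example: 1:32 split, 20 km of fibre, two connectors.
losses = [
    17.5,        # 1:32 splitter (~10*log10(32) dB plus excess loss)
    20 * 0.35,   # 20 km of fibre at an assumed 0.35 dB/km
    2 * 0.5,     # two connectors at an assumed 0.5 dB each
]
margin = link_margin_db(tx_power_dbm=2.0,
                        rx_sensitivity_dbm=-27.0,
                        losses_db=losses)
# margin = 2.0 - 25.5 - (-27.0) = 3.5 dB
```

In practice a minimum margin (often a few dB) is required on top of the worst-case loss figures to cover ageing and repair splices.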

Figure 2: Spectral characteristics of the ODN: transmission performance of an 18-channel CWDM filter (ITU-T G.694.2)

The general survey in the fibre-optic chapters supports vendors, operators, students, designers and experts who need to analyse access solutions as a whole, rather than test individual components in isolation.



Table of Contents, for part 1 and 2

Part 1: Test Objectives
Part 2: Test Methods

Part 1: Test Objectives

TITLE PAGE - part 1 .... 1
DOCUMENT INFORMATION - part 1 .... 2
TABLE OF CONTENTS - part 1 .... 4
EXECUTIVE SUMMARY - part 1 .... 8
INTRODUCTION - part 1 (to part 2) .... 9
1.1 Scope .... 9
1.2 Objectives .... 9
1.3 Innovation – what is new? .... 11
1.4 History of the creation of this document .... 11
DEFINITIONS - part 1 (to part 2) .... 13
2.1 Terminology .... 13
2.2 Abbreviations .... 17
CLASSIFICATION FRAMEWORK FOR E2E TESTING - part 1 .... 24
3.1 Purpose of this chapter .... 24
3.2 Classification of Services .... 24
3.3 Classification of Network Domains .... 28
3.4 Classification of Connectivity .... 29
3.4.1 OSI Reference Model .... 29
3.4.2 TCP/IP Reference Model .... 30
3.5 Classification of QoS .... 31
3.6 Classification of Management functions .... 32
3.6.1 TMN model .... 33
3.6.2 e-TOM model .... 33
INVENTORY OF SERVICE REQUIREMENTS - part 1 .... 35
4.1 Introduction .... 35
4.2 Basic Characteristics of Services .... 35
4.3 Basic Requirements to Enable Quality .... 40
4.4 Basic Requirements to Enable Security .... 43
4.4.1 Unauthorised reception/obtaining of information .... 43
4.4.2 Unauthorised usage of resources .... 45
4.4.3 Disturbing the proper functioning of services, networks, applications .... 46
4.4.4 Untraceable or repudiated sending of ‘bad’ information .... 47
4.5 Basic Requirements to Enable Connectivity .... 48
4.6 Basic requirements on Naming and Numbering .... 48
4.7 Basic Requirements on AAA .... 50
4.7.1 Authentication requirements .... 50
4.7.2 Authorization requirements .... 50
4.7.3 Accounting requirements .... 51
4.8 Other Basic Requirements .... 52
4.9 High level requirements for advanced applications .... 52
4.10 “Full service” testing with a limited number of applications .... 56
4.10.1 E2E Service Requirements .... 56
4.10.2 E2E Network Requirements .... 58
QOS TESTING - part 1 (to part 2) .... 63
5.1 Introduction .... 63
5.2 Perceived service quality for end users .... 63
5.2.1 Perceived quality factors for Voice over IP .... 64
5.2.2 Perceived quality factors for Streaming video .... 65
5.2.3 Perceived quality factors for Videoconference .... 66
5.2.4 Perceived quality factors for Web browsing and bulk data .... 66
5.2.5 Perceived quality factors for Interactive gaming .... 67
5.3 Measuring perceived quality .... 67
5.3.1 Subjective measurements .... 68
5.3.2 Objective measurements .... 68
5.3.3 Using network quality to predict user-perceived quality .... 68
5.3.4 Hierarchy of test methods for perceived quality .... 69
5.3.5 Summary .... 70
5.4 Measuring network quality .... 70
5.4.1 Network QoS parameters .... 70
5.4.2 Measurement framework .... 72
5.4.3 Proposed measurements for network quality per application .... 72
5.5 Summary .... 73
5.6 Inventory of Standards .... 74
5.6.1 ITU .... 75
5.6.2 IETF .... 75
5.6.3 Other standardization bodies .... 75
5.7 References .... 76
SERVICE CONNECTIVITY TESTING - part 1 (to part 2) .... 83
6.1 Introduction .... 83
6.2 Test Objectives: Service VIEW .... 84
6.3 High-speed Internet Access Service .... 85
6.3.1 Description .... 85
6.3.2 Limiting Presumptions .... 85
6.3.3 Application Layer protocols .... 85
6.3.4 Transport Protocols .... 85
6.3.5 Network Protocols .... 85
6.3.6 High-Speed Internet Tests .... 86
6.4 Multicast Streaming Service .... 87
6.4.1 Description .... 87
6.4.2 Limiting presumptions .... 87
6.4.3 Application Layer Protocols .... 87
6.4.4 Transport Layer Protocols .... 88
6.4.5 Network Layer Protocols .... 88
6.4.6 Link-Layer Protocols .... 92
6.4.7 Multicast Streaming Tests .... 93
6.5 Voice over IP Service .... 96
6.5.1 Description .... 96
6.5.2 Limiting Presumptions .... 97
6.5.3 Application Layer Protocols .... 97
6.5.4 Network Layer .... 100
6.5.5 Voice over IP Tests .... 100
6.6 Inventory of standards .... 103
6.6.1 ITU-T .... 103
6.6.2 IETF .... 103
6.6.3 Other standardization bodies .... 103
NETWORK CONNECTIVITY TESTING - part 1 (to part 2) .... 109
7.1 MUSE Architectures .... 109
7.2 The MUSE reference model .... 110
7.3 The MUSE Data Plane .... 111
7.3.1 Interfaces .... 112
7.3.2 Data Plane TESTS .... 115
7.4 The MUSE Control Plane .... 119
7.4.1 Authentication .... 119
7.4.2 Auto-configuration .... 123
7.4.3 MUSE QoS Architecture .... 125
7.4.4 Control Plane TESTS .... 130
7.5 References .... 135
CONNECTIVITY TESTING OF RESIDENTIAL GATEWAYS - part 1 (to part 2) .... 140
8.1 Multicast test objectives .... 141
8.1.1 IGMPv3 Requirements .... 141
8.1.2 IGMPv3-Proxy Requirements .... 148
8.1.3 DSLForum Multicast RGW Requirements .... 149
8.2 DHCP test objectives .... 151
8.2.1 Sources .... 151
8.2.2 Message requirements .... 151
8.2.3 Client Requirements .... 163
8.2.4 Server Requirements .... 169
8.2.5 DHCP Options .... 173
8.3 VLAN .... 177
MANAGEMENT TESTING - part 1 (to part 2) .... 179
9.1 Selecting a reference model for MUSE Management Tests .... 179
9.1.1 Fault management .... 181
9.1.2 Configuration Management .... 181
9.1.3 Accounting Management .... 181
9.1.4 Performance Management .... 181
9.1.5 Security Management .... 182
9.2 Management Tests Objectives .... 182
9.2.1 Testing Fault Management .... 183
9.2.2 Testing Configuration Management .... 196
9.2.3 Testing Account Management .... 209
9.2.4 Testing Performance Management .... 218
9.2.5 Testing Security Management .... 226
9.3 References .... 233
XDSL SPECIFIC TESTING - part 1 (to part 2) .... 235
10.1 Introduction .... 235
10.1.1 Functional testing .... 236
10.1.2 Conformance testing .... 236
10.1.3 Choosing between functional and conformance testing .... 236
10.2 Testing management capabilities .... 237
10.2.1 Testing configuration management .... 237
10.2.2 Line configuration .... 237
10.2.3 Channel configuration .... 240
10.2.4 Testing monitoring capabilities .... 241
10.3 Testing Usability .... 243
10.4 Testing Transceiver signal characteristics .... 244
10.4.1 Total transmit power .... 244
10.4.2 Power Spectral Density .... 245
10.4.3 Power adaption .... 246
10.4.4 Power management .... 248
10.4.5 Power consumption .... 249
10.5 Testing performance under noisy stress conditions .... 249
10.5.1 Basic test setup and stress conditions .... 250
10.5.2 Testing margin under stationary crosstalk noise conditions .... 253
10.5.3 Testing bit rate under stationary crosstalk noise conditions .... 254
10.5.4 Testing Reach under stationary crosstalk noise conditions .... 255
10.5.5 Testing performance under other noise conditions .... 256
10.5.6 Testing minimum required performance .... 257
10.6 Summary .... 258
10.7 References .... 261
FIBRE SPECIFIC TESTING - part 1 (to part 2) .... 263
11.1 Introduction .... 263
11.2 Overall testing plan .... 267
11.3 Functional tests .... 267
11.3.1 Testing management capabilities .... 267
11.3.2 Monitoring capabilities .... 268
11.3.3 Security tests .... 269
11.3.4 Redundancy tests .... 272
11.4 External interface aspects .... 272
11.4.1 Analogue external interfaces .... 273
11.4.2 Framed digital external interfaces .... 274
11.4.3 Digital external interfaces .... 276
11.5 Optical technology aspects .... 277
11.5.1 Element specific aspects .... 278
11.5.2 Optical technology specific aspects .... 285
11.6 Inventory of standards and other documents .... 293
11.7 References .... 294

Part 2: Test Methods

TITLE PAGE - part 2 .... 1
DOCUMENT INFORMATION - part 2 .... 2
TABLE OF CONTENTS - part 2 .... 4
EXECUTIVE SUMMARY - part 2 .... 7
INTRODUCTION - part 2 (to part 1) .... 8
1.1 Scope .... 8
1.2 Objectives .... 8
1.3 Innovation – what is new? .... 10
1.4 History of the creation of this document .... 10
1.5 Summary per chapter .... 11
DEFINITIONS - part 2 (to part 1) .... 14
2.1 Terminology .... 14
2.2 Classification Framework .... 18
2.3 Abbreviations .... 18
QOS TESTING - part 2 (to part 1) .... 26
3.1 Introduction .... 26
3.2 Pre-conditions for testing QoS .... 27
3.2.1 Test scenarios .... 27
3.2.2 Limiting other network internal pre-conditions .... 29
3.2.3 Terminal related issues .... 30
3.2.4 Determining traffic load regimes .... 31
3.3 Voice-over-IP .... 33
3.3.1 Perceived QoS for VoIP .... 33
3.3.2 Test methods .... 34
3.3.3 Test specification at application layer .... 35
3.3.4 Test specification at network layer .... 38
3.4 Streaming Video .... 39
3.4.1 Perceived QoS for streaming Video .... 39
3.4.2 Test methods .... 40
3.4.3 Test specification at application layer .... 43
3.4.4 Test specification at network layer .... 46
3.5 Video conference .... 47
3.5.1 Perceived QoS for Video conferencing .... 47
3.5.2 Test methods .... 48
3.5.3 Test specification at application layer .... 48
3.5.4 Test specification at network layer .... 51
3.6 Web browsing and downloads .... 52
3.6.1 Perceived QoS for web browsing and downloads .... 52
3.6.2 Test methods .... 53
3.6.3 Test specification at application layer .... 54
3.6.4 Test specification at network layer .... 57
3.7 Interactive gaming .... 58
3.7.1 Perceived QoS for Interactive Gaming .... 58
3.7.2 Test methods .... 59
3.7.3 Test specification at application layer .... 59
3.7.4 Test specification at network layer .... 59
3.8 Annex to Chapter 3: Sending and capturing streaming video .... 61
3.8.1 Sending and receiving video .... 61
3.9 Manual: iVQM_iptv HOWTO .... 63
3.9.1 Network architecture and hardware requirements .... 63
3.9.2 Monitoring with iVQM_iptv .... 67
3.10 References .... 70
SERVICE CONNECTIVITY TESTING - part 2 (to part 1) .... 74
4.1 Introduction .... 74
4.2 Tools .... 74
4.3 High-speed Internet Service .... 74
4.3.1 Functional Tests .... 74
4.4 Performance Tests .... 75
4.5 Multicast Streaming Service .... 75
4.5.1 Functional Tests .... 75
4.5.2 Performance Tests .... 80
4.6 Voice over IP Service .... 82
4.6.1 Functional Tests .... 82
4.6.2 Performance Tests .... 88
NETWORK CONNECTIVITY TESTING - part 2 (to part 1) .... 92
5.1 Introduction .... 92
5.2 MUSE Data Plane .... 92
5.2.1 Network layer .... 92
5.2.2 Link Layer Tests .... 95
5.3 MUSE Control Plane .... 108
5.3.1 Functional tests .... 108
5.3.2 Performance tests .... 110
CONNECTIVITY TESTING OF RESIDENTIAL GATEWAYS - part 2 (to part 1) .... 122
6.1 Introduction .... 122
6.2 Multicast test methods .... 122
6.2.1 Testing IGMP-Proxy Requirements .... 126
6.2.2 Testing DSLForum Multicast Requirements for RGW .... 164
6.3 DHCP test methods .... 171
6.3.1 DHCP Client .... 179
6.4 VLAN test methods .... 203
6.4.1 Test: upstreamvlan .... 203
6.4.2 Test: vlantransparant .... 204
MANAGEMENT TESTING - part 2 (to part 1) .... 205
7.1 Introduction .... 205
7.2 Test methods for MUSE Management tests .... 206
7.2.1 Test Methods for Fault Management .... 206
7.2.2 Test Methods for Configuration Management .... 218
7.2.3 Test Methods for Account Management .... 232
7.2.4 Test Methods for Performance Management .... 243
7.2.5 Test Methods for Security Management .... 251
7.3 References .... 260
XDSL SPECIFIC TESTING - part 2 (to part 1) .... 262
8.1 Introduction .... 262
8.2 Test configurations .... 262
8.2.1 Set-ups .... 263
8.2.2 Test loops .... 267
8.2.3 Noise sources .... 268
8.2.4 System configuration profiles .... 271
8.3 Testing management capabilities .... 272
8.3.1 Configuration management tests .... 273
8.4 Testing Transceiver Signal Characteristics .... 291
8.4.1 Introduction .... 291
8.4.2 Measurement difficulties .... 292
8.4.3 Testing total transmit power .... 297
8.4.4 Power Spectral Density .... 299
8.4.5 Power Back Off .... 302
8.4.6 Power management .... 305
8.5 Testing performance under noisy stress conditions .... 311
8.5.1 Introduction .... 311
8.5.2 Reported line parameter verification tests .... 313
8.5.3 Testing margin .... 315
8.5.4 Testing bit rate .... 317
8.5.5 Testing resistance against slowly varying transmission impairments .... 319
8.5.6 Testing bit rate using INP_min and Delay_max configuration .... 322
8.5.7 Testing resistance against impulsive noise .... 323
8.5.8 Intercomparing test results .... 328
8.6 Example test configurations .... 331
8.6.1 Example test loops .... 331
8.6.2 Example noise profiles for stationary crosstalk noise .... 331
8.6.3 Example system configuration profiles .... 336
8.6.4 Example PSD specifications .... 347
8.7 References .... 348
FIBRE SPECIFIC TESTING - part 2 (to part 1) .... 349
9.1 Introduction .... 349
9.2 Test configurations .... 349
9.3 Transceiver characteristics .... 352
9.3.1 Optical transmitter .... 353
9.3.2 Optical receiver .... 360
9.3.3 Optical power budget calculation .... 362
9.4 Testing performance under stress conditions .... 364
9.4.1 Determination of optical reach .... 364
9.4.2 Sensitivity to reflection (P2MP, no optical circulators) .... 366
9.4.3 Sensitivity to reflection (P2MP, with optical circulators) .... 368
9.4.4 Differential fibre distance .... 369
9.4.5 Sensitivity to crosstalk .... 370
9.5 Testing optical signal quality on different parts of optical path .... 371
9.5.2 Q-factor/BER estimation .... 373
9.5.3 Received optical power level .... 375
9.5.4 Optical path penalty .... 377
9.6 ODN characteristic .... 378
9.6.1 Insertion Loss .... 378
9.6.2 Optical Return Loss .... 379
9.6.3 Chromatic Dispersion .... 380
9.6.4 Polarization Mode Dispersion .... 382
9.6.5 Spectral Response .... 382
9.7 References .... 383

[MUSE]

Test Suite Team

This Test Suite has been created by a mixed team of experts, led by Rob F.M. van den Brink (TNO), during a period of four years (2004-2007).

Topic (chapter in part 1 / chapter in part 2; × = not applicable): Names

Taskforce leadership (all): Rob van den Brink (TNO)
Overall editorial work (all): Federiko Krommendijk (TNO), Kamal Ahmed (TNO)
"Teaser" document (all): Rob van den Brink (TNO), with input from all
Introduction (1 / 1): Rob van den Brink (TNO)
Definitions (2 / 2): all
Classification (3 / ×): Hervé le Bihan (THO), Rob Kooij (TNO), António Gamelas (PT), Gilbert le Houerou (FT), Federiko Krommendijk (TNO), Rob van den Brink (TNO), Pieter Liefooghe (IMEC), Brecht Vermeulen (IMEC)
Service requirements (4 / ×): Hervé le Bihan (THO), Rob van den Brink (TNO), Georgina Galizzo (TID), plus the QoS team (chapter 5)
QoS (5 / 3): Rob Kooij (TNO), Jeroen van Vugt (TNO), Kamal Ahmed (TNO), Kjell Brunnström (ACR), Tanja Kauppinen (ACR), Stéphane Junique (ACR), Georgina Gallizo (TID), António Gamelas (PT)
Service connectivity (6 / 4): Pieter Liefooghe (IME), Brecht Vermeulen (IME), Björn Nagel (DT), Arnaud Riaudel (FT), Frank Geilhardt (DT)
Network connectivity (7 / 5): Pieter Liefooghe (IME), Brecht Vermeulen (IME), Björn Nagel (DT), Arnaud Riaudel (FT), Frank Geilhardt (DT)
Residential gateways (8 / 6): Johannes Deleu (IBBT), Brecht Vermeulen (IBBT), Alex De Smedt (THO)
Management systems (9 / 7): Gilbert le Houerou (FT), Hélder Alves (PTI), Federiko Krommendijk (TNO), Harold Balemans (LuNL), Bruno Veloso (PTI), Rob van den Brink (TNO)
xDSL-based systems (10 / 8): Bas Gerrits (TNO), Rob van den Brink (TNO), Mauro Tilocca (TI)
Fibre-based systems (11 / 9): Andrzej Mosek (TP), Marcin Ratkiewicz (TP), Joachim Vathke (HHI), Kai Habel (HHI), Klaus-Dieter Langer (HHI)