Chapter One
Introduction to computer relaying
1.1 Development of computer relaying
The field of computer relaying started with attempts to investigate whether power system relaying functions could be performed with a digital computer. These investigations began in the 1960s, a period during which the digital computer was slowly and systematically replacing many of the traditional tools of analytical electric power engineering. The short circuit, load flow, and stability problems - whose solution was the primary preoccupation of power system planners - had already been converted to computer programs, replacing the DC boards and the Network Analyzers. Relaying was thought to be the next promising and exciting field for computerization. It was clear from the outset that digital computers of that period could not handle the technical needs of high speed relaying functions. Nor was there any economic incentive to do so. Computers were orders of magnitude too expensive. Yet, the prospect of developing and examining relaying algorithms looked attractive to several researchers. Through such essentially academic curiosity this very fertile field was initiated. The evolution of computers over the intervening years has been so rapid that algorithmic sophistication demanded by the relaying programs has finally found a correspondence in the speed and economy of the modern microcomputer; so that at present computer relays offer the best economic and technical solution to the protection problems - in many instances the only workable solution. Indeed, we are at the start of an era in which computer relaying has become routine, and it has further influenced the development of effective tools for real-time monitoring and control of power systems.
In this chapter we will briefly review the historical developments in the field of computer relaying. We will then describe the architecture of a typical computer based relay. We will also identify the critical hardware components, and discuss the influence they have on the relaying tasks.
1.2 Historical background
One of the earliest published papers on computer relaying explored the somewhat curious idea that relaying of all the equipment in a substation would be handled by a single computer. No doubt this was motivated by the fact that computers were very expensive at that time (the 1960s), and there could be no conceivable way in which multiple computers would be economically palatable as a substitute for conventional relays, which were at least one order of magnitude less expensive than a suitable computer. In addition, the computation speed of contemporary computers was too slow to handle high speed relaying, while the power consumption of the computers was too high. In spite of these obvious shortcomings - which reflected the then current state of computer development - this early work explored several protection algorithmic details thoroughly, and even today provides a good initiation for the novice into the complexities of modern relaying practices.
Several other papers were published at approximately the same time, and led to the algorithmic development for protection of high voltage transmission lines. It was recognized early that transmission line protection function (distance relaying in particular) - more than any other - is of greatest interest to relay engineers because of its widespread use on power systems, its relatively high cost, and its functional complexity. These early researchers began a study of distance protection algorithms which continues unabated to this day. These studies have led to important new insights into the physical nature of protection processes and the limits to which they can be pushed. It is quite possible that distance relaying implementation on computers has been mastered by most researchers by now, and that any new advances in this field are likely to come from the use of improved computer hardware to implement the well-understood distance relaying algorithms. An entirely different approach to distance relaying has been proposed during recent years. It is generally based upon the utilization of traveling waves initiated by a fault to estimate the fault distance. Traveling wave relays require relatively high frequencies for sampling voltage and current input signals. Although traveling wave relays have not offered compelling advantages over other relaying principles in terms of speed and accuracy of performance, they have been applied in a few instances around the world with satisfactory performance. This technique will be covered more fully in Chapter 9; it remains for the present a somewhat infrequently used relaying application. Fault location algorithms based on traveling waves have also been developed and there are reports of good experience with these devices. These too will be covered more fully in Chapter 9.
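The traveling-wave idea mentioned above can be illustrated with a short sketch. In the common double-ended scheme, the fault launches waves toward both line terminals, and the difference in their (time-synchronized) arrival times locates the fault: d_A = (L + v·(t_A − t_B)) / 2. The function below is a minimal illustration of that arithmetic only; the names, the assumed wave speed, and the synchronized time stamps are assumptions for the example, not any particular relay's implementation.

```python
def fault_distance(line_length_km, t_arrival_a, t_arrival_b,
                   wave_speed_km_s=2.9e5):
    """Estimate the distance to the fault from terminal A, in km.

    t_arrival_a, t_arrival_b : arrival times (seconds) of the initial
        traveling-wave front at the two line ends, assumed to share a
        common (e.g. satellite-synchronized) time reference.
    wave_speed_km_s : assumed propagation speed, slightly below the
        speed of light for an overhead line.
    """
    # The wave reaches the nearer terminal first, so the time difference
    # (t_A - t_B) splits the line length around its midpoint.
    return (line_length_km
            + wave_speed_km_s * (t_arrival_a - t_arrival_b)) / 2.0

# Example: on a 100 km line, the front reaches end A 50 microseconds
# before end B, placing the fault closer to A.
d_from_a = fault_distance(100.0, 1.000000, 1.000050)
```

The high sampling rates noted in the text come from this arithmetic: a timing error of even a few microseconds translates directly into a location error of roughly a kilometer at these propagation speeds.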
In addition to the development of distance relaying algorithms, work was begun early on apparatus protection using the differential relaying principle. These early references recognize the fact that compared to the line relaying task, differential relaying algorithms are less demanding of computational power. Harmonic restraint function adds some complexity to the transformer protection problem, and problems associated with current transformer saturation or other inaccuracies continue to have no easy solutions in computer based protection systems just as in conventional relays. Nevertheless, with the algorithmic development of distance and differential relaying principles, one could say that the ability of computer based relays to provide performance at least as good as conventional relays had been established by the early 1970s.
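The differential principle and the harmonic restraint mentioned above can be sketched briefly. A percentage-differential element trips when the differential (operating) current exceeds a fraction of the restraint (through) current, and a second-harmonic restraint blocks tripping during transformer magnetizing inrush, which is rich in second harmonic. The phasor inputs, slope, pickup, and harmonic-ratio settings below are illustrative assumptions only, not recommended values or any vendor's logic.

```python
def differential_trip(i1, i2, i_diff_h2,
                      slope=0.25, pickup=0.2, h2_ratio=0.15):
    """Percentage-differential decision with 2nd-harmonic restraint.

    i1, i2    : fundamental-frequency current phasors (complex, per unit)
                on the two sides of the protected apparatus, with CT
                ratios already matched so that i1 + i2 ~ 0 for load or
                external faults.
    i_diff_h2 : magnitude of the 2nd-harmonic component of the
                differential current (large during inrush).
    """
    i_diff = abs(i1 + i2)                 # operating current
    i_rest = (abs(i1) + abs(i2)) / 2.0    # restraint (through) current

    # Harmonic restraint: block if the 2nd harmonic is a large fraction
    # of the differential current, indicating probable inrush.
    if i_diff > 0 and i_diff_h2 / i_diff > h2_ratio:
        return False

    # Percentage-differential characteristic with a minimum pickup.
    return i_diff > pickup and i_diff > slope * i_rest
```

An external fault gives currents that cancel (no trip); an internal fault gives currents that add (trip), unless the harmonic restraint identifies the differential current as inrush.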
Very significant advances in computer hardware have taken place since those early days. The size, power consumption, and cost of computers have gone down by orders of magnitude, while simultaneously the speed of computation has increased by several orders of magnitude. The appearance of 16 bit (and more recently of 32 bit) microprocessors and computers based upon them made high speed computer relaying technically achievable, while at the same time the cost of computer based relays began to become comparable to that of conventional relays. This trend has continued to the present day - and is bound to persist in the future - although perhaps at not quite as precipitous a rate. In fact, it appears well established by now that the most economical and technically superior way to build relay systems of the future (except possibly for some functionally simple and inexpensive relays) is with digital computers. The old idea of combining several protection functions in one hardware system has also re-emerged to a certain extent - in the present day multi-function relays.
With reasonable prospects of having affordable computer relays which can be dedicated to a single protection function, attention soon turned to the opportunities offered by computer relays to integrate them into a substation-wide, perhaps even a system-wide, network using high-speed wide-band communication networks. Early papers on this subject realized several benefits that would flow from this ability of relays to communicate. As will be seen in Chapters 8 and 9 integrated computer systems for substations which handle relaying, monitoring, and control tasks offer novel opportunities for improving overall system performance by exchanging critical information between different devices.
1.3 Expected benefits of computer relaying
It would be well to summarize the advantages offered by computer relays, and some of the features of this technology which have required new operational considerations. Among the benefits flowing from computer relays are:
1.3.1 Cost
All other things being equal, the cost of a relay is the main consideration in its acceptability. In the early stages of computer relaying, computer relay costs were 10 to 20 times greater than the cost of conventional relays. Over the years, the cost of digital computers has steadily declined; at the same time their computational power (measured by instruction execution time and word length) has increased substantially. The cost of conventional (analog) relays has steadily increased over the same period, partly because of design improvements, but also because of general inflation and a relatively low volume of production and sales. It is estimated that for equal performance the cost of the most sophisticated digital computer relays (including software costs) would be about the same as that of conventional relaying systems. Clearly there are some conventional relays - overcurrent relays are an example - which are so inexpensive that cheaper computer relays to replace them seem unlikely at present, unless they are a part of a multi-function relay. However, for major protection systems, competitive computer relay costs have definitely become an important consideration.
1.3.2 Self-checking and reliability
A computer relay can be programmed to monitor several of its hardware and software subsystems continuously, thus detecting any malfunctions that may occur. It can be designed to fail in a safe mode - i.e. take itself out of service if a failure is detected - and send a service request alarm to the system center. This feature of computer relays is perhaps the most telling technical argument in favor of computer relaying. Misoperation of relays is not a frequent occurrence, considering the very large number of relays in existence on a power system. On the other hand, in most cases of power system catastrophic failures the immediate cause of the escalation of events that leads to the failure can be traced to relay misoperation. In some cases, it is a mis-application of a relay to the given protection task, but in a majority of cases it is due to a failure of a relay component that leads to its misoperation and the consequent power system breakdown. It is expected that with the self-checking feature of computer based relays, the relay component failures can be detected soon after they occur, and could be repaired before they have a chance to misoperate. In this sense, although computer based relays are more complex than electromechanical or solid state relays (and hence potentially more likely to fail), as a system they have a higher rate of availability. Of course, a relay cannot detect all component failures - especially those outside the periphery of the relay system.
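The self-checking idea described above can be sketched in a few lines: the relay periodically exercises its own subsystems, and on any failure it blocks its tripping outputs (fails safe) and raises a service alarm. The subsystem names and checks below are purely illustrative assumptions for the sketch; an actual relay's self-tests (memory checksums, A/D reference checks, and so on) are design-specific.

```python
def run_self_checks(checks):
    """Run each named check function; return the names that failed."""
    return [name for name, check in checks.items() if not check()]


def monitor_cycle(checks, trip_enable, raise_alarm):
    """One pass of the background self-monitoring loop.

    checks      : dict mapping subsystem name -> zero-argument check
                  function returning True if the subsystem is healthy.
    trip_enable : callback that enables/disables the tripping outputs.
    raise_alarm : callback that sends a service request to the
                  system center.
    """
    failed = run_self_checks(checks)
    if failed:
        trip_enable(False)   # fail safe: take the relay out of service
        raise_alarm(failed)  # request maintenance before a misoperation
    return failed


# Illustrative checks: ROM checksum, A/D converter reading a known
# reference voltage within tolerance, RAM pattern test.
example_checks = {
    "rom_checksum": lambda: True,
    "adc_reference": lambda: abs(0.999 - 1.0) < 0.01,
    "ram_pattern": lambda: True,
}
```

The point of the sketch is the ordering: the relay disables itself before alerting maintenance, so that a detected component failure cannot cause a misoperation while awaiting repair.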
1.3.3 System integration and digital environment
Digital computers and digital technology have become the basis of most systems in substations. Measurements, communication, telemetry and control are all computer based functions. Many of the power transducers (current and voltage transformers) are in the process of becoming digital systems. Fiber optic links, because of their immunity to Electromagnetic Interference (EMI), are likely to become the medium of signal transmission from one point to another in a substation; it is a technology particularly suited to the digital environment. In substations of the future, computer relays will fit in very naturally. They can accept digital signals obtained from newer transducers and fiber optic channels, and become integrated with the computer based control and monitoring systems of a substation. As a matter of fact, without computer relaying, the digital transducers and fiber optic links for signal transmission would not be viable systems in the substation.
1.3.4 Functional flexibility and adaptive relaying
Since the digital computer can be programmed to perform several functions as long as it has the input and output signals needed for those functions, it is a simple matter for the relay computer to take on many other substation tasks. For example, measuring and monitoring flows and voltages in transformers and transmission lines, controlling the opening and closing of circuit breakers and switches, and providing backup for other devices that have failed are all functions that can be taken over by the relay computer. The relaying function calls for intensive computational activity when a fault occurs on the system. This intense activity occupies the relaying computer for only a very small fraction of its service life - less than a tenth of a percent. The relaying computer can thus take over these other tasks at practically no extra cost.
With the programmability and communication capability, the computer based relay offers yet another possible advantage that is not easily realizable in a conventional system. This is the ability to change relay characteristics (settings) as system conditions warrant it. More will be said about this aspect (adaptive relaying) in Chapter 10.
The high expectations for computer relaying have been mostly met in practical implementations. It is clear that most benefits of computer relaying follow from the ability of computers to communicate with various levels of a control hierarchy. The full flowering of computer relaying technology therefore has only been possible with the arrival of an extensive communication network that reaches into major substations. Preferably, the medium of communication would be fiber optic links with their superior immunity to interference, and the ability to handle high-speed high-volume data. It appears that the benefits of such a communication network would flow in many fields, and as more such links become available, the computer relays and their measurement capabilities become valuable in their own right. Where extensive communication networks are not available, many of the expected benefits of computer relaying must remain unrealized.
Other issues which are specific to computer relaying technology should also be mentioned. It has been noted that digital computer technology has advanced at a very rapid pace over the last twenty years. This implies that computer hardware has a relatively short lifespan. The hardware changes significantly every few years, and the question of maintainability of old hardware becomes crucial. The existing relays have performed well for long periods - some as long as 30 years or more. Such relays have been maintained over this period. It is difficult to envision a similar lifespan for computer based equipment. Perhaps a solution lies in the modularity of computer hardware; computers and peripherals belonging to a single family may provide a longer service life with replacements of a few modules every few years. As long as this can be accomplished without extensive changes to the relaying system, this may be an acceptable compromise for long service life. However, the implications of rapidly changing computer hardware systems are evident to manufacturers and users of this technology.
Software presents problems of its own. Computer programs for relaying applications (or critical parts of them) are usually written in lower level languages, such as assembly language. The reason for this is the need to utilize the available time after the occurrence of a fault as efficiently as possible. Relaying programs tend to be computation and input-output bound. The higher level languages tend to be inefficient for time-sensitive applications. It is possible that in time, with computer instruction times becoming faster, the higher level languages could replace much of the assembly language programming in relaying computers. The problem with machine level languages is that they are not transportable between computers of different types. Some transportability between different computer models of the same family may exist, but even here it is generally desirable to develop new software in order to take advantage of differing capabilities among the different models. Since software costs are a very significant part of computer relaying development, non-transferability of software is a significant problem.
In the early period of computer relaying development, there was some concern about the harsh environment of electric utility substations, in which the relays must function. Extremes of temperature, humidity, pollution as well as very severe EMI must be anticipated.