# Editor

### BaseEditor

> `BaseEditor` is the class for factual and generation editing. Given the edit descriptor and the edit target, you can use different editing methods to change the behavior of the model.

#### **from\_hparams()** -> *BaseEditor*

> Class method; constructs a `BaseEditor` from a set of hyperparameters.

```python
def from_hparams(cls, hparams: HyperParams):
    return cls(hparams)
```

* **Parameters**
  * hparams(<mark style="color:purple;">HyperParams</mark>): hyperparameters for the chosen editing method
* **Return Type**
  * editor(<mark style="color:purple;">BaseEditor</mark>): The Editor class defined by `hparams`

#### **edit()** -> *List\[<mark style="color:purple;">Dict</mark>]*

> Main entry point: performs fact editing with the selected editing method

```python
def edit(self,
    prompts: Union[str, List[str]],
    target_new: Union[str, List[str]],
    ground_truth: Optional[Union[str, List[str]]],
    rephrase_prompts: Optional[Union[str, List[str]]] = None,
    locality_inputs:  Optional[Dict] = None,
    portability_inputs: Optional[Dict] = None,
    keep_original_weight=False,
    verbose=True,
    **kwargs
    )
```

* **Parameters**
  * prompts(<mark style="color:purple;">Union\[str, List\[str]]</mark>): the prompt string(s) of the edit descriptor
  * target\_new(<mark style="color:purple;">Union\[str, List\[str]]</mark>): the target output string(s) of the edit
  * ground\_truth(<mark style="color:purple;">Optional\[Union\[str, List\[str]]]</mark>): the original model output for the edit descriptor (may be `None`)
  * rephrase\_prompts(<mark style="color:purple;">Optional\[Union\[str, List\[str]]]</mark>): rephrased prompt string(s), semantically similar to `prompts`, used to test Generalization
  * locality\_inputs(<mark style="color:purple;">Optional\[Dict]</mark>): for each measurement dimension, a prompt list and its corresponding ground-truth list; used to test Locality
  * portability\_inputs(<mark style="color:purple;">Optional\[Dict]</mark>): same structure as `locality_inputs`; used to test Portability
  * keep\_original\_weight(<mark style="color:purple;">bool</mark>): whether the original weights are restored after each edit
    * `False`: edits are applied sequentially (each edit builds on the weights left by the previous one)
    * `True`: each edit starts from the original weights, so editing is not sequential
  * verbose(<mark style="color:purple;">bool</mark>): whether to print intermediate output
* **Return Type**
  * metrics(<mark style="color:purple;">List\[Dict]</mark>): the evaluation metrics for the edits (see this [link](https://github.com/zjunlp/EasyEdit#evaluation-1) for details)
  * edited\_model(<mark style="color:purple;">PreTrainedModel</mark>): the model with the edited weights
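The dictionaries passed as `locality_inputs` and `portability_inputs` map each measurement dimension to a prompt list and a parallel ground-truth list. A minimal sketch of the layout (the dimension names `'neighborhood'` and `'one_hop'` are illustrative, not required by the API):

```python
# Sketch of the expected dictionary layout for locality/portability tests.
# Each dimension maps to parallel 'prompt' and 'ground_truth' lists.
locality_inputs = {
    'neighborhood': {  # illustrative dimension name
        'prompt': ['Which university did J. Robert Oppenheimer attend?'],
        'ground_truth': ['Harvard University'],
    },
}

portability_inputs = {
    'one_hop': {  # illustrative dimension name
        'prompt': ['In which state is the university Watts Humphrey attended?'],
        'ground_truth': ['Michigan'],
    },
}

# Within every dimension, the two lists must have the same length,
# since prompt i is scored against ground truth i.
for dims in (locality_inputs, portability_inputs):
    for pair in dims.values():
        assert len(pair['prompt']) == len(pair['ground_truth'])
```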

#### **batch\_edit()** -> *List\[<mark style="color:purple;">Dict</mark>]*

> Main entry point: performs fact editing on a batch of edits with the selected editing method

```python
def batch_edit(self,
    prompts: List[str],
    target_new: List[str],
    ground_truth: Optional[List[str]] = None,
    rephrase_prompts: Optional[List[str]] = None,
    locality_prompts: Optional[List[str]] = None,
    locality_ground_truth: Optional[List[str]] = None,
    keep_original_weight=False,    
    verbose=True,
    **kwargs
    )
```

* **Parameters**
  * prompts(<mark style="color:purple;">List\[str]</mark>): the prompt strings of the edit descriptors
  * target\_new(<mark style="color:purple;">List\[str]</mark>): the target output strings of the edits
  * ground\_truth(<mark style="color:purple;">Optional\[List\[str]]</mark>): the original model outputs for the edit descriptors (may be `None`)
  * rephrase\_prompts(<mark style="color:purple;">Optional\[List\[str]]</mark>): rephrased prompt strings, semantically similar to `prompts`, used to test Generalization
  * locality\_prompts(<mark style="color:purple;">Optional\[List\[str]]</mark>): prompts unrelated to the edits, used to test Locality
  * locality\_ground\_truth(<mark style="color:purple;">Optional\[List\[str]]</mark>): the ground-truth answers for `locality_prompts`
  * keep\_original\_weight(<mark style="color:purple;">bool</mark>): whether the original weights are restored after each edit
    * `False`: edits are applied sequentially (each edit builds on the weights left by the previous one)
    * `True`: each edit starts from the original weights, so editing is not sequential
  * verbose(<mark style="color:purple;">bool</mark>): whether to print intermediate output
* **Return Type**
  * metrics(<mark style="color:purple;">List\[Dict]</mark>): the evaluation metrics for the edits (see this [link](https://github.com/zjunlp/EasyEdit#evaluation-1) for details)
  * edited\_model(<mark style="color:purple;">PreTrainedModel</mark>): the model with the edited weights
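Unlike `edit()`, `batch_edit()` takes lists only, and the lists must be index-aligned: position *i* of every list describes edit *i*. A sketch of preparing the arguments (the edit facts are illustrative; the actual call, shown in the comment, requires an initialized `BaseEditor`):

```python
# batch_edit takes parallel lists: index i of every list describes edit i.
prompts = [
    'What university did Watts Humphrey attend?',
    'Which continent is Nigeria in?',
]
target_new = ['University of Michigan', 'Asia']  # counterfactual targets
ground_truth = ['Illinois Institute of Technology', 'Africa']

# All supplied argument lists must have the same length.
assert len(prompts) == len(target_new) == len(ground_truth)

# With an editor built via BaseEditor.from_hparams(...):
# metrics, edited_model = editor.batch_edit(
#     prompts=prompts,
#     target_new=target_new,
#     ground_truth=ground_truth,
#     keep_original_weight=True,
# )
```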

#### Example

```python
hparams = MENDHyperParams.from_hparams('./hparams/MEND/gpt2-xl')
editor = BaseEditor.from_hparams(hparams)
metrics, edited_model, weight_copy = editor.edit(
    prompts='What university did Watts Humphrey attend?',
    ground_truth='Illinois Institute of Technology',
    target_new='University of Michigan',
    keep_original_weight=True,
)
```
