# MEND

<figure><img src="https://3121926949-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FK2Vdw69n5ipoU5KHLczg%2Fuploads%2FwN83juUcQpJ4fYi4jZiJ%2FMEND.png?alt=media&#x26;token=c943d528-81a9-4986-a868-a48f0d6f95cf" alt=""><figcaption><p>Paper: Fast Model Editing at Scale</p></figcaption></figure>

### MendRewriteExecutor

> `MendRewriteExecutor` is the class that applies MEND to your model. It employs a hyper-network to learn the `delta` needed to edit the language model.
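Conceptually, MEND exploits the fact that the gradient of a linear layer for a single example is rank-1 (an outer product `u ⊗ v`), and its hyper-network transforms the two factors to produce the weight `delta`. A toy, pure-Python sketch of that idea (not the real MEND code; `hypernet` is a hypothetical stand-in for the learned network):

```python
# Toy illustration of MEND's rank-1 delta construction.
# The real hyper-network is learned; here it is a hypothetical scale transform.

def outer(u, v):
    """Outer product of two vectors as a nested list."""
    return [[ui * vj for vj in v] for ui in u]

def hypernet(u, v):
    # Hypothetical stand-in for the learned hyper-network:
    # transforms each gradient factor independently.
    return [0.5 * x for x in u], [2.0 * x for x in v]

def mend_delta(u, v):
    """Map the rank-1 gradient factors (u, v) to a weight update."""
    u_t, v_t = hypernet(u, v)
    return outer(u_t, v_t)
```

The key design point is that operating on the factors keeps the hyper-network's size proportional to the layer's width rather than to the full weight matrix.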

#### **init\_model()** -> *None*

> MEND requires a pre-trained hyper-network with a specific structure and weights (train it with the `Trainer` first)

```python
def init_model(self, model, tok, params: MENDHyperParams)
```

* **Parameters**
  * model(<mark style="color:purple;">PreTrainedModel</mark>): model to be edited
  * tok(<mark style="color:purple;">PreTrainedTokenizer</mark>): tokenizer for inputs
  * params(<mark style="color:purple;">Hyperparams</mark>): hyperparameters for editing method
* **Return Type**
  * None

#### apply\_to\_model()-> *PreTrainedModel*

> Main function: given the requests, it applies MEND to your model and returns the edited model weights.

```python
def apply_to_model(
    self,
    model: AutoModelForCausalLM,
    tok: AutoTokenizer,
    requests: List[Dict],
    hparams: MENDHyperParams,
    copy=False,
    return_orig_weights=False,
    keep_original_weight=False,
    **kwargs
):
```

* **Parameters**
  * model(<mark style="color:purple;">PreTrainedModel</mark>): model to be edited
  * tok(<mark style="color:purple;">PreTrainedTokenizer</mark>): tokenizer for inputs
  * requests(<mark style="color:purple;">List\[Dict]</mark>): The edit descriptors and targets.
  * hparams(<mark style="color:purple;">Hyperparams</mark>): hyperparameters for editing method
  * copy(<mark style="color:purple;">bool</mark>): whether to copy original model
  * return\_orig\_weights(<mark style="color:purple;">bool</mark>): whether to return the weights of original model
  * keep\_original\_weight(<mark style="color:purple;">bool</mark>): whether each edit starts from the original weights
    * `False`: edit sequentially (the original weights are not restored after each edit, so edits accumulate)
    * `True`: do not edit sequentially (each edit is applied to the original weights)
* **Return Type**
  * edited\_model(<mark style="color:purple;">PreTrainedModel</mark>): model weights after editing
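The effect of `keep_original_weight` can be illustrated with a minimal pure-Python sketch (not EasyEdit code; a scalar weight and additive deltas stand in for the model and its edits):

```python
def apply_edits(weights, deltas, keep_original_weight):
    """Toy illustration of the keep_original_weight flag.

    keep_original_weight=True  -> each edit is applied to the original
                                  weights independently (single-edit mode).
    keep_original_weight=False -> edits accumulate sequentially, since the
                                  original weights are not restored between edits.
    """
    if keep_original_weight:
        # every edit independently modifies the original weights
        return [weights + d for d in deltas]
    # sequential editing: each edit builds on the previous one
    current = weights
    results = []
    for d in deltas:
        current = current + d
        results.append(current)
    return results
```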

#### Example

<pre class="language-python"><code class="lang-python"># ...
hparams = MENDHyperParams.from_hparams("llama-7b.yaml")
editor = BaseEditor.from_hparams(hparams)
prompts = ['What university did Watts Humphrey attend?',
    'Which family does Ramalinaceae belong to?',
    'What role does Denny Herzig play in football?'
]
target_new = ['University of Michigan',
    'Lamiinae',
    'winger'
]
metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    target_new=target_new,
    keep_original_weight=True
)
</code></pre>
